HCL Roundtable: Emerging AppSec Trends in 2021 and Beyond
In an era where data is more valuable than gold, cyberattacks in their many manifestations, from account hijacking to injection attacks, have dominated global headlines. Be it credential theft, brute-force attacks, social engineering or access-control misconfiguration, the sophistication and the damage are rising by the day. While attackers range from ‘spray and pray’ opportunists looking for easy pickings to high-profile, targeted nation-state espionage operations, the business impact of these breaches runs into trillions of dollars.
The shift to a remote workforce in 2020 has highlighted the need for an approach to app development that has security built-in from inception. In the current digital landscape, security is essential to achieving business resiliency and maintaining quality while developing at the speed of DevOps. Prioritising speed without security in app development can lead to an uptick in critical vulnerabilities with disastrous results. To avoid this, organisations must address security earlier in the software development life cycle.
Cybersecurity Ventures1 has published several eye-opening statistics that put the importance of security in the new normal into perspective. Cybercrime damage costs are predicted to hit $6 trillion annually by 2021, and ransomware attacks on healthcare organisations — often called the No. 1 cyber-attacked industry — are expected to quadruple. Cybersecurity Ventures expects that a business will fall victim to a ransomware attack every 11 seconds in 2021, up from every 14 seconds in 2019, making ransomware the fastest-growing type of cybercrime. The recent attacks on SolarWinds and FireEye underscore that no organisation is immune to threats and attacks. Attackers are looking for ways to evade IT attention, bypass defences and exploit emerging weaknesses. The fallout from these attacks will likely occupy a large share of the attention of government and Fortune 500 cybersecurity teams in 2021 and will result in the rollout of more stringent cybersecurity policies, especially targeting supply-chain vulnerabilities.
In ages past, the Greeks, Romans and other mighty empires prided themselves on building impenetrable fortresses around their kingdoms to protect themselves from outside invaders. There is a similar theme in the 21st century as corporations invest heavily in perimeter security to insulate their ‘business data’ empires from outside threats. To some extent this does prevent attackers from infiltrating; however, with the advent of a new era of apps and IoT devices, and now with an accelerated change in ways of working during a global pandemic, the notion that best-in-class perimeter security alone is enough has received a timely wake-up call.
“Application security is really a partnership. In past years, security has often been seen as a silo. And what we’ve learned along the way is that we need better alignment between software development and the security that we’re trying to put into it. And we have to be able to build that in throughout, versus trying to bolt it on at the end,” notes Robert Cuddy, Global AppScan Evangelist, HCL Software. He adds, “We need to understand and identify risk earlier, when it’s easier to mitigate and it certainly costs less to do so. That encompasses a whole gamut of things around visibility, reducing false positives – which we spend an awful lot of time doing – providing information for targeted remediation, etc.” Cuddy observes that while we go with defence in depth and put in firewalls, network security and identity and access management, among a host of other things, organisations have to think both ‘outside-in’ and ‘inside-out’, which is where application security comes into effect.
In his insightful blog2, Cuddy rightly observes that security needs to be a business enabler, not just a gatekeeper; that means security professionals need to be aligned with the business. He goes on to explain that when great security practices are well integrated throughout the software development lifecycle (SDLC), and meaningful, actionable feedback is provided to teams at all stages, then risk is better monitored, managed, minimised and mitigated.
Security is a foundational need that everyone involved in developing software has to embrace. This starts with developers and QA, who have to provide data on whether software can be made secure before committing to a release. The DevSecOps team must actively identify and manage risk through proactive planning, developing agile methods for continuous testing and making security part of the overall product strategy. Achieving agile AppSec also requires a focus on usability and accessibility to ensure the end-user experience is functional, intuitive and secure. The DevSecOps team must enable continuous testing and incorporate security from design and development, through testing, and into the DevOps cycle.
Thinking like a hacker is probably the best way to do threat modelling and to create mitigation strategies and security controls. The usual threats to applications include broken authentication and session management, cross-site scripting (XSS), security misconfiguration, injection and cross-site request forgery (CSRF). Being compliant, contrary to popular belief, does not make the environment ‘secure’; in fact, there can be a false sense of security when compliance is pursued without the right context of risk and threat mitigation.
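To make the injection risk concrete, here is a minimal sketch, with a hypothetical users table, of the difference between an injectable query and a parameterised one:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: attacker input is concatenated straight into the SQL text.
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterised: the driver treats the input strictly as data, so an
    # injection payload is matched literally instead of executed.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
print(find_user_unsafe(conn, "' OR '1'='1"))  # leaks every row in the table
print(find_user_safe(conn, "' OR '1'='1"))    # returns nothing
```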
The levels of sophistication and the pace of attacks by malicious actors are increasing rapidly, and security teams are doing their best to respond and recover. The problem analysts face is a high volume of alerts and noise, much of which turns out to be false positives. A whitepaper3 by Netsparker finds that developers and testers eventually lose faith in vulnerability scanners that generate false alarms, and begin to ignore whole classes of problems over which the scanner cries wolf. Every vulnerability report means additional work, and when a tool repeatedly reports problems that turn out to be false alarms, human nature dictates that people start mechanically ticking boxes and dismissing findings. Worse, if one of the dismissed findings is a critical vulnerability, it slips into production uncaught and unrepaired, at high cost for later manual testing and remediation.
However, ruling a finding out as a false positive can itself require a good deal of extra testing before a developer is confident it is a false alarm. Crucially, someone has to take personal responsibility for ruling against the scanner and signing off code where potentially serious issues have been flagged as false alarms.
In an agile development environment, automation is king, and manual security processes are not feasible at scale. DevOps and CI/CD teams rely on automated tools to do the legwork so they can focus on tasks that require the creativity and problem-solving skills of highly qualified specialists. False positives in vulnerability testing force testers and developers to put their streamlined automated processes on hold and laboriously review each false alarm as if it were a real vulnerability.
False positives can also be detrimental to team dynamics. Every time the security team reports a vulnerability, the developers have extra work investigating and fixing the issue, so reliability and mutual trust are crucial to maintaining good relations. This makes false alarms particularly aggravating, and if the vulnerability scan results burden the developers with unnecessary workloads, the working relationship may quickly turn sour. The dev team may start treating the security people as irritating timewasters, leading to an “us vs. them” mentality – with disastrous consequences for collaboration and the entire software development lifecycle.
The National Institute of Standards & Technology (NIST) conducted a series of studies on the effectiveness of Static Application Security Testing (SAST) tools. The study4 revealed that, on average, AppSec tools have an astonishing false-positive rate of 30%, with a further 36% of reported findings proving insignificant. False positives have been identified as one of the leading obstacles to introducing AppSec tools to developers: 90% of developers are only willing to accept a false-positive rate of around 5%.
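A back-of-the-envelope calculation shows why those two rates are worlds apart. The sketch below is illustrative only: the weekly finding volume and per-review time are assumptions, with the 30% and 5% rates taken from the figures above.

```python
def wasted_triage_hours(findings_per_week: int,
                        false_positive_rate: float,
                        minutes_per_review: float) -> float:
    """Hours a team spends each week reviewing alarms that turn out to be false."""
    false_alarms = findings_per_week * false_positive_rate
    return false_alarms * minutes_per_review / 60

# Hypothetical team: 400 SAST findings a week, 20 minutes to rule each one out.
print(wasted_triage_hours(400, 0.30, 20))  # 40.0 hours -- a full head count
print(wasted_triage_hours(400, 0.05, 20))  # ~6.7 hours
```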
CBT Nuggets has noted some best-practice guidelines for AppSec5, and these can be evolved to best suit the needs of your organisation in the ever-changing, fast and furious world of information technology.
When it comes to advancing DevOps practices and patterns for enterprises, human transformation is the most critical success factor. According to Jayne Groll, CEO of the DevOps Institute and author of the 2020 Upskilling Report6, “With the rise of hybrid (remote/in-office) product teams, upskilling and online training initiatives will expand. As the pressure continues to rise to sell products and services through e-commerce sites, apps, or SaaS solutions, the lines between product and engineering teams will rapidly blur, giving rise to cross-functional, multidisciplinary teams that must learn and grow together. Each member will need to develop a wider combination of process skills, soft skills, automation skills, functional knowledge, and business knowledge, while maintaining deep competency in their focus areas. Product and engineering teams will be measured on customer value delivered, rather than just features or products created.” She goes on to explain that traditional upskilling and talent-development approaches won’t be enough for enterprises to remain competitive, because demand for IT professionals with core human skills is escalating to a point business leaders have not seen in their lifetimes. This calls for updating our people with new skill sets as often, and with the same focus, as our technology.
Synopsys Roundtable: Hard lessons to be learnt from the SolarWinds attack
On Dec 11, 2020, the SolarWinds Orion security breach, a.k.a. SUNBURST, impacted numerous U.S. government agencies, business customers and consulting firms. Hackers managed to plant a backdoor in SolarWinds’ Orion platform, which is widely used across both government and private organisations. The malicious software looked legitimate because it was signed with SolarWinds’ certificate. This is known as a supply-chain attack because it infects software while it is under assembly. The compromised update has had a sweeping impact, the scale of which keeps growing as new information emerges.
A staggering 18,000 customers were caught up in this meticulously planned attack, carried out via the SolarWinds hack between March and December 2020. SolarWinds is an information technology management firm based in Austin, Texas that has led the US market in network management system (NMS) software since 2017. Its NMS, Orion, monitors and analyses operations, helping businesses manage their systems, networks and infrastructure. As reported to the SEC (Securities and Exchange Commission), around 18,000 of its customers downloaded the compromised March update of Orion. The list includes 80% of the Fortune 500 and, more concerningly, United States government organisations such as the Treasury Department, the Commerce Department and the Department of Homeland Security.
While investigations are still underway, the full extent of the impact is yet to be deciphered. The hackers got into the code-building environment and, in a very sophisticated way, were able to insert a backdoor into SolarWinds’ Orion network management software code. Most concerning is that not even the Department of Defense, Microsoft or Cisco caught the breach; it was eventually uncovered by the cybersecurity company FireEye, which reported to SolarWinds that its code was tainted. The US government has labelled the rogue group Advanced Persistent Threat 29, or APT29, also known as ‘Cozy Bear’, but attribution remains speculative, as there is still no solid proof of the hackers’ identity. And even though the sophistication of such attacks can seem beyond comprehension, the recent Twitter fiasco, where the culprits turned out to be a bunch of teenagers, shows that hacking has almost become ‘child’s play’ for some and a recurring nightmare for corporations.
Vulnerable third-party products are providing attackers with a foothold, hence the label ‘supply-chain attack’, and this incident has put third-party software squarely in the spotlight. It is also clear that attackers more often than not remain undetected within the network, and in this case probably got in through a weak password. FireEye first got wind of the breach when a login from an unrecognised device and location was reported: a simple application of MFA that caught one of the most destructive attacks ever seen. What hasn’t been estimated yet is the extent of the damage or the breadth of the lateral movement, as the attackers left no artifacts from their code and covered their tracks almost to perfection.
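A check of the kind that reportedly tipped off FireEye, a login from a device or location the user has never used before, can be sketched very simply. The data structures below are entirely hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    username: str
    registered_devices: set = field(default_factory=set)
    usual_countries: set = field(default_factory=set)

def login_requires_review(profile: UserProfile, device_id: str, country: str) -> bool:
    # Flag any login or second-factor enrolment from a device or location
    # the user has never used before, for a human analyst to verify.
    return (device_id not in profile.registered_devices
            or country not in profile.usual_countries)

alice = UserProfile("alice", {"laptop-7f3a"}, {"AU"})
print(login_requires_review(alice, "phone-unknown", "AU"))  # True -> raise an alert
```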
“From a risk management standpoint, it is of paramount importance to identify your crown jewels, segregate the home environment into different pieces and build really strong gates around it,” says Ashwath Reddy, Principal Consultant, Synopsys Software Integrity Group (SIG). Reddy also notes the importance of vendor management: understanding vendors’ security requirements (pen tests or source-code reviews) and reviewing and negotiating contracts with them. As containers become a common method of packaging and deploying apps, securing them has become a big priority for DevOps engineers. Scanning and auditing images and containers for bugs and vulnerabilities has thus become crucial for DevSecOps teams, who play an important role in building security into DevOps processes.
With increasing pressure to build and release software faster than ever before, security controls that should be addressed early in the software development life cycle (SDLC) are often not addressed until it’s far too late.
Failing to build security controls into applications in the design phase carries costly downstream consequences.
By creating threat models for external assets and components like APIs, cloud infrastructure, and hosted data centres, you can begin to anticipate new forms of attack and prioritise application risks by factors such as likelihood. An architectural risk assessment dives deeper, mapping and analysing the correlations between threats, internal assets and design structure to expose system flaws scattered throughout your application’s architecture. Examining your application’s design through threat modelling and architectural risk assessment helps uncover design flaws early in the SDLC that traditional testing methods often miss.
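As one illustrative way to prioritise modelled threats by likelihood, they can be ranked by a likelihood-times-impact score. The threats and weights below are invented for illustration:

```python
# Illustrative likelihood (0-1) and impact (1-10) scores for modelled threats.
threats = {
    "exposed API key in cloud config":  (0.6, 9),
    "SQL injection in legacy endpoint": (0.3, 8),
    "DoS against public login page":    (0.5, 4),
}

# Rank by expected impact so mitigation effort goes to the worst threats first.
ranked = sorted(threats.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (likelihood, impact) in ranked:
    print(f"{likelihood * impact:5.2f}  {name}")
```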
In today’s threat landscape, a breach is usually a matter of ‘when’, not ‘if’; in many instances a breach has already occurred within the network and the organisation simply hasn’t discovered it yet, despite deploying prevention strategies and technology. This calls for a change in mindset to an ‘assume breach’ mentality rather than a purely prevention-focused one. It guides design decisions, security investments and operational security practices: ‘assume breach’ treats both internal and external resources (networks, identities and services) as insecure and probably already compromised.
The total impact of a potential security event is usually measured by its blast radius. Strategies for isolation, segmentation and least privilege in IAM are crucial to preventing further lateral movement after a breach. The blast radius is usually larger in a cloud environment, potentially resulting in catastrophic damage to businesses. At the dawn of the COVID-19 crisis, businesses rushed to the cloud, and while cloud providers advertise strong compliance and security measures, security is a shared responsibility. Rushing into cloud migrations and spinning up servers recklessly exposes companies to a host of threats, including insecure interfaces, platform misconfigurations, unauthorised access and account hijacking.
John Kindervag, principal analyst at Forrester Research Inc., created the now-famous Zero Trust network (or Zero Trust architecture) model in 2010. CIOs and CISOs today are shifting away from the ‘protect the perimeter’ mindset, recognising that most of the worst attacks unfold once the attacker gains access inside the network and moves internally without resistance.
With no internal or external users or machines automatically trusted, a Zero Trust network assumes that attackers exist both within and outside the network. One way of enforcing this is least-privilege access: giving users only enough access to complete their pertinent task and no more, minimising their exposure to sensitive areas of the network. Multi-factor authentication (MFA) has proven effective at stopping attackers by requiring proof from more than one device or factor to log in. Controls on device access further minimise the threat surface by authorising every device, every time access is required.
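Conceptually, a zero-trust access decision reduces to a single predicate that demands proof of identity, device posture and least-privilege entitlement on every request. This is a sketch of the idea, not any vendor’s implementation; the role names are hypothetical:

```python
def authorize(user_roles: set, device_trusted: bool,
              mfa_passed: bool, required_role: str) -> bool:
    """Zero-trust style check: every request must prove identity (MFA),
    device posture and least-privilege entitlement -- there is no implicit
    trust for simply being 'inside' the network."""
    return mfa_passed and device_trusted and required_role in user_roles

# A payroll clerk on an enrolled laptop can read payroll data, nothing more.
print(authorize({"payroll:read"}, True, True, "payroll:read"))    # True
print(authorize({"payroll:read"}, True, True, "payroll:admin"))   # False
print(authorize({"payroll:read"}, False, True, "payroll:read"))   # False: unmanaged device
```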
As we witness exponential growth in software applications, security threats have risen in equal measure. This calls for effective and efficient security measures built on best practices and tooling.
For over a decade, the Building Security In Maturity Model (BSIMM) report2 has provided a measuring stick and blueprint to help CISOs and security teams compare the maturity of their programs against those of their peers. Measurements and benchmark data are derived from organisations participating in the BSIMM, so it provides a direct line of sight into the real AppSec program strategies being practiced today. Application security isn’t simply about deploying tools and running tests. It’s about aligning people, process, and technology to address application security risks holistically.
Synopsys recently published its Complete Application Security Checklist3.
In the journey through the security jungle, having a road map is key to successful navigation. While open-source and in-house developed apps are on an upward trajectory, attacks exploiting vulnerabilities in open-source code libraries have also increased. The choices are many, but the faster and sooner in the software development process companies can find and fix security issues, the safer enterprises will be.
Cofense Roundtable: How to Save Time and Resources with Advanced Phishing Automation
New phishing variants are created daily to evade email gateway security solutions, and once in an environment they can remain undetected for months. Well-staffed security teams often spend up to 80% of their time analysing phishing threats, but with volumes continuously increasing, a significant number go unassessed. Other organisations do not have the budget, expertise or resources required to monitor evolving threat tactics that incorporate effective social engineering, to patch vulnerabilities on legacy systems, or to evolve outdated workflows.
The biggest win for hackers is when users fall for genuine-looking links in phishing emails and unwittingly enter their credentials and, in many cases, their single sign-on (SSO) details. This is called credential harvesting, and it is a tactic frequently used by threat actors. In fact, according to the Cofense 2021 Annual State of Phishing Report, among the millions of emails the Cofense Phishing Defense Center (PDC) analysed, more than 57% were credential phish.
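One simple heuristic against credential harvesting is to flag lookalike domains that sit close to, but are not, a brand’s real domain. The sketch below uses Python’s standard-library string similarity; the protected domains and threshold are hypothetical, and production phishing detection is far more involved:

```python
from difflib import SequenceMatcher

PROTECTED_DOMAINS = ["example-bank.com", "login.example.com"]  # hypothetical

def looks_like_phish(domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that are near, but not equal to, a protected domain --
    the classic credential-harvesting lure (e.g. 'examp1e-bank.com')."""
    for legit in PROTECTED_DOMAINS:
        similarity = SequenceMatcher(None, domain, legit).ratio()
        if domain != legit and similarity >= threshold:
            return True
    return False

print(looks_like_phish("examp1e-bank.com"))  # True  -> quarantine for review
print(looks_like_phish("example-bank.com"))  # False -> the real domain
```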
Once a threat is inside the network, it is important to quickly identify, analyse and remediate it. Often the best way to identify threats that have evaded gateway security technology is to rely on the intuition of the people who receive them. Accordingly, end-user education and training based on real phishing threats is of paramount importance in giving security teams visibility of threats that weren’t blocked at the gateway. But reporting is only half of the equation; the second half is analysing the suspected phish and taking appropriate action. A security service like Cofense’s Managed Detection and Response is a great solution. “The PDC has five locations around the world, including a new operational centre based in Melbourne. Our team of highly trained experts manages the incident response and remediation for some of our largest customers globally, with millions of end users reporting various threats that evade popular SEGs from Symantec, Proofpoint and Microsoft,” notes Dalton Cole, Director of Sales, Australia and New Zealand, Cofense®. “Our team reviews thousands of phishing emails each day that have been reported in environments protected by the leading SEG vendors, with an average of 3,500 per customer each year.” Cole also offers tips and guidance on how to solve the phishing problem.
A good phishing detection and response (PDR) platform can detect threats, respond quickly and integrate threat intel into the existing security stack. Cofense delivers a Phishing Detection and Response platform built from several such solutions.
Cofense combines advanced automation technology with over 26 million people around the world reporting suspected phish. When an attack is detected in one organisation, the intelligence is used to stop attacks in other organisations across Cofense’s network of customers.
“Attackers are diversifying the malware used in phishing campaigns and finding new ways to monetise phishing. In 2020, Cofense identified a major diversification in the malware families prominent in phishing, which brought an unprecedented amount of disruption, directly leading to an increase in both the volume and the variety of threat activity. Threat actors continued to advance their tactics, techniques and procedures to ensure their emails would reach end users throughout the year. Emotet was effectively overtaken by another banking trojan, Trickbot, which spread via malicious spam campaigns such as spear-phishing emails disguised as unpaid invoices or account information updates,” says Ryan Jones, Sales Director Asia Pacific, Cofense.
Jones also notes that one tactic Cofense increasingly observed over the past year is the use of multi-stage websites the user must navigate, also known as layering, which leverages safe domains. As email security technologies evolve their ability to detect malicious URLs within emails, threat actors are exploiting popular services. These services are often deemed safe or business-critical and are not blocked or restricted.
This is where the network effect comes to the forefront: these attacks can be prevented with the help of a community focused on identifying and fighting them. As Jones put it, “there’s a lot of data that can be shared to prevent attacks reaching other organisations. The benefit to organisations utilising the network effect of shared intel is that threats known to one are also known to others that haven’t been attacked yet or haven’t yet reported the attack.”
A report by Forrester [1] noted that enterprise security teams are ‘drowning in alerts’, with the average security-operations group receiving more than 11,000 security alerts daily. While manpower shortages are not something businesses can solve quickly, advancements in technology are here today. Research by not-for-profit AustCyber finds Australia facing a shortage of 18,000 cybersecurity experts by 2026 as the nation fights unprecedented attacks on business, government and critical infrastructure. CISOs and CIOs have a daunting task in identifying and retaining in-house talent while continuing to staff for increased demand. In many cases, it is not practical to maintain a team of highly skilled security specialists covering everything from simple to very complex incident-response activities. A hybrid approach of outsourcing low-value tasks while keeping high-value tasks in-house helps develop skilled specialists who stay for longer. Tasks that are repetitive, monotonous and time-consuming in nature are generally outsourced or automated, such as level-one monitoring or phishing analysis.
The push towards automation started several years ago, and the COVID-19 pandemic has accelerated the release of automation software and systems capable of handling processes and actions that were never contemplated before. In cybersecurity, leveraging artificial intelligence and machine learning has enabled fast responses to threats, often without human intervention.
Cyber criminals target the weakest link in the security chain to get the fastest and largest return on their investment; in most cases, the weakest link is people. With remote work becoming the norm during the pandemic for many organisations, there has been a significant rise in phishing emails and credential harvesting targeting remote workers. There is a need for robust threat management and for ‘out-humanning’ attackers through investment in smarter technology. Legacy software-only security systems are unable to keep up with innovative, human-designed phishing attacks. This calls for advanced Phishing Detection and Response (PDR) platforms, pioneered by Cofense, that pair people with progressive technology to quickly identify phishing campaigns, verify high-priority threats and stop attacks within minutes, not days. Organisations managing high volumes of threat assessment with limited resources or expertise can leverage Cofense’s Managed PDR, a managed security service that proactively defends against emerging threats it sees that others don’t. Combined with 26 million people around the world actively identifying and reporting suspected phish, the network effect of using threat intel to stop attacks across every other organisation has become a reality through automation.
As threats evolve and increase in volume, people, process and technology must adapt to automate tasks wherever possible, helping teams shift their focus to critical matters, retain quality staff and reduce operational costs along the way.
All third-party trademarks referenced herein whether in logo form, name form or product form, or otherwise, remain the property of their respective holders, and use of these trademarks in no way indicates any relationship between Cofense Inc. (“Cofense”) and the holders of the trademarks. Any observations contained in this article regarding circumvention of end point protections are based on observations at a point in time based on a specific set of system configurations. Subsequent updates or different configurations may be effective at stopping these or similar threats.
The Cofense® and PhishMe® names and logos, as well as any other Cofense product or service names or logos displayed on this blog are registered trademarks or trademarks of Cofense Inc.
[1] Forrester, as cited in an article by Threatpost
ITRS Roundtable: Modelling your digital transformations into the Hybrid and Distributed Cloud
Today’s complex IT environments make maintaining ‘always-on’ availability more challenging than ever, even as IT has become central to most business operations. Maintaining uninterrupted business operations has become more complicated as IT environments involve a complex mixture of hybrid and cloud infrastructure, middleware and application technologies.
In a digital ‘always-on’ age, corporates and organisations have moved many offline processes online, yet the complexity of the online environment has made it more and more difficult to manage. Putting Digital Transformation at the heart of your business exposes your customers to your entire IT stack as well as gaps in operational resiliency.
Many CIOs fail to realise that the digital transformation journey and the move to dynamic environments increase the rate of change and create more opportunities for failure. This increased complexity leads to multiple points of failure in the stack, to more outages, and to higher costs in maintaining both the technology stack and customer satisfaction.
In a recent survey, 72% of respondents said that digital transformation has been the biggest driver of cloud deployments, and 85% agreed that hybrid is the ideal operating model for remaking themselves for the digital future. The rising demand for hybrid and distributed cloud is largely driven by its benefits – security, flexibility, and the need to accommodate multiple cloud options.
Nevertheless, with applications living on traditional IT, on-premises private cloud, off-premises private cloud or public cloud environments, the trend of hybrid and distributed cloud adoption, together with the complexities brought by digital transformation and dynamic environments, is expected to remain one of the biggest challenges for many corporations in the region.
As business processes become more digital and IT failure becomes business failure, the rush to digital can place sudden demands on network and system capacity, leading to misconfigured clouds and other oversights, including in security. Companies therefore need tools that can identify and predict IT failures or performance degradation to enable greater operational resilience.
“People have been talking about digital transformation for at least the last decade,” notes Peter Duffy, Head of Product Management, ITRS Group, “and what we’re seeing in the current wave of digital transformation is an increasing rate at which various previously offline or hybrid processes are moving to the digital world. And I think one of the things that characterises this wave of digital transformation is that these are not necessarily processes that were completely offline now moving online, but perhaps a monolithic architecture in your own data centre moving to the cloud, or elastic environments moving to dynamic environments or to microservices architectures (MSA). And what we’re seeing is a number of smaller pieces moving online and being used to support multiple business processes in a microservices architecture.” Duffy expands further on these effects of digital transformation.
With marketplace demands to improve efficiency and toplines, businesses are pivoting at unprecedented pace to roll out ‘best in class’ products and services in double-quick time. Technology enables that, but the complexities of monolithic architectures versus microservices have taken a front seat in ‘future-proofing’ debates and investment decisions.
The advantages of MSA are also heavily connected to technologies and methodologies like private cloud (for elasticity), Docker containers, DevOps for collaboration and agile methods. Designing for isolated failures that do not impact other services is a key feature that is earning increasing trust and confidence. Delaying investment decisions risks lost revenue, reputational damage and regulatory repercussions, which ultimately underlines the direct correlation between IT success and business success.
Decentralisation reared its head as a buzzword in 2020, from blockchain to Jeff Bezos’s ‘two-pizza rule’ for decentralised teams (a team shouldn’t be bigger than two pizzas can feed). This certainly resonates with the MSA approach of developing a single application as a suite of small services, each running in its own process and independently deployable.
With the advent of cloud computing and the internet of things (IoT), there is growing impetus in the industry for cost optimisation driving CAPEX-versus-OPEX decisions; while server budgets are seeing declines, digital enterprises are opting for the elastic cloud world. One thing we are seeing is significant overspending in public cloud environments.
A RightScale report1 on the state of the cloud noted that over 35% of public cloud spend is wasted. There is certainly a need to increase the breadth of monitoring to get clear visibility of applications where customers, clients and partners are potentially accessing services from outside at all times of day. Companies can even purchase a relational database as a service on AWS and leverage cloud computing to handle administrative tasks including database setup, hardware provisioning, patching and data backups, freeing up time from non-strategic tasks.
Security, control and visibility are key elements to consider while moving to a hybrid cloud model. While cloud providers do support management systems for one aspect of the hybrid cloud, that might not extend to systems running in other public clouds or on-premises. As workloads may comprise multiple applications across different environments, it is critical to have a clear and complete understanding of what is happening with each workload to generate end-to-end visibility. Many CIOs have adopted DevOps and cloud as a way to deploy and update applications more rapidly.
However, they have retained their traditional approach to sizing the infrastructure required. Application teams over-spec the cloud environments they initially provision to be sure of a successful application launch. This static approach to cloud sizing and buying, where infrastructure capacity is defined once and then rarely revisited, does not take advantage of the cloud’s greatest asset: flexibility. The critical issue is the failure to follow the established steps in the DevOps life cycle and resize the infrastructure supporting applications during each release cycle.
The TCO (total cost of ownership) of cloud storage is, more often than not, higher than what vendors advertise. To get a full picture of the costs, the various direct costs, including storage, egress fees, access fees and replication fees, need to be factored in – not to mention indirect costs like cloud data monitoring, data security, backup and data migration. This calls for right-sizing storage by data type, identifying over-provisioned resources, right-sizing workloads with automation, selecting the correct pricing plans and re-platforming existing deployments onto newer cloud services, resulting over time in cost optimisation that minimises wasted cloud spend.
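Those direct and indirect costs can be rolled into a rough monthly TCO estimate. The sketch below is illustrative only; every rate in it is a placeholder, not any provider’s actual pricing:

```python
def monthly_storage_tco(stored_gb: float, egress_gb: float, requests_k: float,
                        price_per_gb: float = 0.023,        # placeholder rates
                        egress_per_gb: float = 0.09,
                        price_per_k_requests: float = 0.005,
                        ops_overhead: float = 200.0) -> float:
    """Rough monthly cost: storage + egress + access fees, plus a flat
    allowance for the indirect costs (monitoring, backup, security)
    that headline per-GB prices omit."""
    return (stored_gb * price_per_gb
            + egress_gb * egress_per_gb
            + requests_k * price_per_k_requests
            + ops_overhead)

# 10 TB stored, 2 TB egress, 500k requests: egress alone rivals the storage fee.
print(round(monthly_storage_tco(10_000, 2_000, 500), 2))  # 612.5
```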
1 RightScale State of the Cloud Report
Cloud monitoring helps to analyse patterns and detect potential security risks in the cloud infrastructure. With multiple elements in that infrastructure, several aspects need constant monitoring to ensure business continuity and cost optimisation.
ITRS2 has noted three key steps to analyse estate requirements and optimise cloud resources.
Guy Warren, CEO, ITRS Group, likened today’s cloud discussions to the debates of about 100 years ago over bringing electricity into buildings by wire, at a time when a generator within the building premises was the only source of electricity.
The possibility of getting an external energy source by wire seemed unfathomable then, and some cloud discussions are at a similar stage today. The business case, economics and logic are too powerful to prevent the inevitable move of workloads to the cloud; it is only a matter of time. It is true that numerous teething issues and pains, including regulatory restrictions, are delaying complete adoption for many businesses, especially in the financial sector.
With the global pandemic raising questions of IT agility, accessibility and flexibility, there has been a renewed focus on operational demands, shifting how businesses will leverage cloud computing to remain competitive and scalable. Serverless architecture, AI platforms, edge computing, DevSecOps and open source are some of the trends cited for 2021 and beyond that are gathering steam and taking cloud computing into the fourth industrial revolution, which blurs the boundaries between the physical, digital and biological worlds.
Gartner3 predicted that by 2021, over 75% of midsize and large organisations will have adopted a multi-cloud or hybrid IT strategy. This will give rise to cloud-native technologies; among the components making up the cloud-native technology stack are serverless computing, orchestration platforms (like Kubernetes) and containerisation (to move workloads between environments). Cloud adoption is truly a journey we have embarked on, not a destination per se.
Rather than focusing solely on making technology faster and more flexible, enterprises should not forget that the real reason for cloud investment and adoption is to drive business growth and beat the competition – which depends on strategic business outcomes as much as on leveraging the latest tech capabilities.
2 ITRS Group
3 Gartner, 5 Approaches to Cloud Application Integration
HCL Roundtable: Securing Endpoints and Ensuring Compliance
Enterprises are under increasing pressure to protect data from breaches in an aggressive attack environment. The global pandemic has compounded these challenges, with companies embarking on the largest-ever work-from-home experiment. The shift has forced enterprises to rethink how security is delivered and ensured. That includes critical tasks, such as patch management, compliance and reducing overall risk. Enterprises are re-evaluating their security software and toolkits to adjust for new threats and risks but with an eye on reducing complexity. Budgets are also a top concern. While the global economy slows, threats are not abating. Security professionals must find new ways to solve problems and reduce risks on tight budgets, which means the tools must perform.
While the spotlight in 2020 was firmly on the global COVID-19 pandemic, the shift in workers’ location from a secure office perimeter to their homes created a massive global cybersecurity risk as well, as bad actors pivoted to infiltrate and breach networks to extract valuable data that was previously safe within the office network. Several such examples made news headlines.
There has been an exponential rise in reported security breaches. One breach that made global news, owing to the platform’s reach, was the compromise of Twitter accounts held by high-profile celebrities. This brought to the forefront the concept of social engineering and how easily a few teenagers were able to gain access to those social media accounts. IT hygiene and employee education on the risks and consequences have been highlighted as being of paramount importance on this journey of businesses trying to protect their biggest asset: data. Especially in cases of ransomware, it is important to have detection capabilities and full visibility across business operations, which is the only way to systematically shut down or quarantine infected devices.
IT professionals report that software patch management is among the most siloed and poorly coordinated activities, leaving vulnerabilities open and organisations highly susceptible to cyber-attacks. With burgeoning networks, manual risk assessments, and inadequate staffing and skills, centralising and automating the patching strategy is increasingly advocated across the globe. Browser security, device control (regulating peripheral devices), application control (black-, white- and grey-listing of applications), BitLocker management, and vulnerability management and threat mitigation are among the ways cyber criminals’ entry points are blocked. All of these also need to be executed alongside proper MFA and strict VPN policies for connecting to the company network in the new normal of remote working.
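At its core, the centralised, automated patch reporting described here reduces to comparing every endpoint against a baseline. A toy sketch, with an invented inventory:

```python
from datetime import date

# Hypothetical inventory: endpoint -> (installed patch level, last patch date).
ENDPOINTS = {
    "sydney-laptop-014": ("KB500810", date(2021, 1, 4)),
    "perth-server-002":  ("KB500799", date(2020, 9, 30)),
}

CURRENT_BASELINE = "KB500810"
MAX_AGE_DAYS = 30

def non_compliant(today: date) -> list:
    """Central view: flag every endpoint behind baseline or unpatched too long."""
    return [name for name, (level, patched) in ENDPOINTS.items()
            if level != CURRENT_BASELINE or (today - patched).days > MAX_AGE_DAYS]

print(non_compliant(date(2021, 2, 1)))  # ['perth-server-002'] -> queue for patching
```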
“With a dramatic shift to new ways of working at organisations, there is increased complexity in managing corporate and BYO devices to ensure that they are protected and visible,” says Matthew Burns, Director, BigFix, Asia Pacific & Japan at HCL Software. “The term that sometimes confuses people is continuous compliance, as compliance and security policies are sometimes based on milestone achievements – doing certain things to check the status. CISOs are now determining how to keep control of a distributed workforce, make sure it is secure, and report the protected position back to the Board or CEO. There are a lot of commonalities, as organisations in Japan, India, Malaysia, Singapore, the Philippines and Australia are all facing the same problem.”
A recent notable breach set off warning bells across the world about how vulnerable even top companies are to cybercrime. The breach went undetected for months, and the hacked code was sent out in software updates to customers, creating a backdoor into their information technology systems, which hackers then used to install malware enabling them to spy on those companies. The breach served as a rude wake-up call to the cybersecurity industry and instilled the mindset of acting at all times as if there were already a breach in your network, rather than reacting to attacks only when found.
Humans have time and again been tagged as the weakest link in the security chain. Fostering and implementing a security-embedded culture in organisations is key to completing the circle on systems and automation in the secure development lifecycle. Constant awareness messaging, posters and the like, combined with reward and recognition, accelerate the continuous-improvement and protection vision for the institution. Keeping the awareness drive fun while enforcing accountability for the decisions people make inculcates ownership and makes security everyone’s responsibility.
Suppliers, external users, smartphones, tablets, laptops and a myriad of IoT devices are connected to networks that are increasingly complex to manage, dispersed and heterogeneous – and each asset is a potential attack point. “Visibility, control and smart automation are discussed, in that order, with organisations who embark on the discovery and endpoint protection journey with us. The primary discussion is always about visibility: uncovering the endpoints, finding the software running on them and the operating systems used. With a lot of industries built on acquisitions and takeovers as part of their growth strategies, this brings new companies into an organisation with a whole lot of different assets,” says Burns. “We have, more often than not, uncovered over-deployment of tools, and consolidating those toolsets resulted in immediate cost savings as well as better management. Our endpoint management platform enables IT operations and security teams to fully automate discovery, management and remediation – whether on-premises, virtual or cloud – regardless of operating system, location or connectivity.”
The Monetary Authority of Singapore (MAS) has been planning changes to its Technology Risk Management (TRM) and Business Continuity Management (BCM) guidelines, first established in 2013 and 2003 respectively, which will require financial organisations to implement more measures, including cyber surveillance, to boost operational resilience. “A cyber-attack can result in a prolonged disruption of business activities. Threats are constantly present and evolving in sophistication. We cannot afford to be complacent. Financial institutions must therefore remain vigilant and have in place effective technology risk management practices and robust business continuity plans to ensure prompt and effective response and recovery,” said Tan Yeow Seng, MAS Chief Cyber Security Officer. There were three key categories of amendments.
The amended guidelines represent a strong step towards further strengthening the defences of Singapore’s financial ecosystem, placing the industry in good stead for the post-COVID economic recovery, and emphasise governments’ ever-evolving regulatory control and serious intent to fight cybercrime in Asia and globally.
With more suppliers and service providers touching sensitive data, the attack surface of enterprises has changed drastically in recent years. A third-party or value-chain attack occurs when your system is infiltrated via a partner or outside party who has access to your systems and data. A prime example is the NotPetya malware, which compromised a Ukrainian accounting software product and disrupted the operations of global corporations – including a global integrated shipping company and a large delivery-services company – that had used that firm as a third party. While these kinds of breaches are not new, nation-state actors are getting ever more sophisticated in the tools they use to infiltrate enterprise networks, steal information and damage systems. In pursuit of cost savings, process efficiency and market differentiation in service delivery, corporations are increasing their use of third-party suppliers in executing their growth strategies. Failing to evolve the supplier risk-management framework alongside these sourcing changes will result in painful commercial, regulatory and reputational risks, some of which might turn the new supplier’s advantages into a rather disruptive situation – hence the need to evaluate every outsourcing decision within the decision-making and risk-management framework. Vendor relationship complexity obfuscates cybersecurity risk in an interconnected IT ecosystem.
In the interest of responding quickly to the pandemic-instituted changes in ways of working, CISOs and CSOs rolled out measures to ensure business continuity. Remote working was established in industries that needed to pivot quickly from a 100% office-based workforce to an almost completely remote one, and this certainly put pressure on VPNs, which experienced increased workloads. Fiscal budgets for 2021 are expected to shrink for various reasons, most prominently pandemic-driven revenue declines. However, with the ever-increasing threat of cybersecurity breaches, CISOs will not compromise on investing in key priorities like remote access, next-gen identity and access controls, automation, security education and training, third-party security and perimeter security. This trend is only increasing as changes in consumer behaviour push consumption of products and services online, demanding that organisations pivot and provide digital tools and services that are secure and reliable for their customers.
IT security compliance can also be looked at from a benefits angle, given the many advantages it offers to overall business growth.
In today’s threat landscape, best practice in patch management is of paramount importance to prevent security incidents that disrupt business operations. Windows and third-party patch management strategies, sustained by an ongoing patching process and complemented by advanced email security, DNS filtering and privileged access management, have been key in supporting traditional firewalls and antivirus tools to protect valuable business assets. This incorporates creating an asset register; scheduled patch planning and deployment; consistent testing and reporting; and ultimately automating the process – closing vulnerabilities, sustaining a secure IT environment and freeing up skilled IT security resources to deal with more progressive and critical security issues. Burns noted that many people jump into building a very comprehensive security strategy without having patch management sorted; he compared it to designing chandeliers for a house before a solid foundation has been built. Patching is a core component in discussions, and the state of an organisation can be quickly gauged in the discovery phase from its patching posture.
Cyber solutions, like other technologies, are heading towards real-time predictive methods such as machine learning (ML) and artificial intelligence (AI) to analyse and act with speed. Next-gen endpoints can use cloud-enabled real-time detection to thwart high-volume, multi-stage attacks targeting endpoints, with features that typically include automated detection and response (ADR) and endpoint detection and response (EDR). They also include ransomware protection and behavioural analysis, which enhance prevention and protection capabilities to increase efficiency, efficacy and ease of use. Autonomous endpoints that can self-heal and regenerate operating systems and configurations are the future of cybersecurity, and technological advancement is certainly trying to match the pace of the disruptions.
Deloitte Roundtable: Extinction level events: how ransomware has changed disaster preparedness
If you were to ask any cybersecurity professional about the most alarming malware trends today, chances are ‘ransomware’ would be one of the first things they would be keen to talk about. This type of malware can aptly be described less as a bug or virus and more as a plague, with a potential to infect massive enterprise IT systems, encrypt everything in sight and subsequently extort its unsuspecting victims. We recently caught up with James Nunn-Price, Asia Pacific Cyber leader at Deloitte, to discuss ransomware trends and how to get prepared for extinction-level events.
Ransomware campaigns have caused some serious damage over the last few years and have targeted a range of organisations from large enterprises and public sector institutions to smaller, family-owned businesses. However, when it comes to the sheer scale of ransomware attacks – and even malware in general – it really doesn’t get much more damaging than the NotPetya ransomware scourge of 2017.
One organisation that famously felt the brunt of NotPetya was Maersk, a shipping and logistics firm that holds the crown as the world’s largest container ship and supply vessel operator. The multi-billion-dollar company’s considerably large IT infrastructure footprint was blitzed by NotPetya, completely disrupting its core business processes for days and resulting in hundreds of millions in lost revenue.
Thousands of organisations around the world have been sorely impacted by a wide variety of ransomware strains and perpetrators. The WannaCry ransomware strain hit more than 230,000 computers around the world, causing massive disruption to Spanish mobile operator Telefónica as well as the UK’s National Health Service. More recently, a ransomware attack on Danish facilities-management firm ISS World left hundreds of thousands of employees unable to access company systems, while causing an estimated $75 million to $112.4 million in total damages.
It is no secret that ransomware attacks are becoming more common, as adversaries and tools become more advanced. Cybersecurity researchers have found a huge increase in the number of ransomware attacks during 2020, suggesting a seven-fold rise in campaigns compared with 2019. Additionally, the severity of attacks is also on the rise, with both requested ransoms and cost of disruption expanding considerably.
Importantly, these attacks don’t just have the capacity to rob organisations, they can also create extinction-level events which become extremely messy and hard to recover from. Thus, organisations need to have the right tools and processes in place if they wish to be adequately prepared for such events.
NotPetya is considered by many to be the worst business-breaking cyber event to date. The malware was released into the wild as part of a coordinated cyberwarfare campaign against Ukraine, carried out by what was likely a nation-state hacking organisation. As a means of sowing unrest in the country, the perpetrators of NotPetya attacked a small organisation called M.E.Doc, which develops tax software for the Ukrainian government.
M.E.Doc was the perfect target for a novel strain of extremely virulent malware, as its software is used by just about everyone who pays taxes in Ukraine, providing a massive victim base for any payload. Using a pair of severe Windows zero-day exploits, the hacking group proceeded to install backdoors in M.E.Doc’s June tax-software update, allowing it to push a new kind of ‘ransomware’ – NotPetya – to anyone who updated their M.E.Doc tax applications.
No one could have ever foreseen the kind of widespread damage this was going to cause, not even the perpetrators themselves. The malware was designed to spread automatically and indiscriminately through large-scale networks, gaining privileged access before rapidly moving laterally to encrypt every device or server it touched. It took out swathes of digital-based services and core business processes for a massive number of Ukrainian businesses, causing a substantial amount of unrest in the country.
While in the immediate term it appeared this was just a wide-scale extortion attempt, it was later found that so little effort had been put into the actual ransom part of the ransomware (with a measly $10,000 collected in total) that this was clearly not the objective of NotPetya. After security researchers chipped away at the strain and reverse-engineered it, it was clear that encrypted files could never be recovered. The malware was created purely to disrupt life and business in Ukraine and nothing else.
However, the attack caused havoc to a wide variety of businesses, even beyond Ukrainian shores. As it spread without intervention, multiple international organisations that used M.E.Doc got caught up as collateral damage, including multiple hospitals in the US state of Pennsylvania, FedEx’s European subsidiary TNT Express, and even a chocolate factory in the Australian state of Tasmania.
It just goes to show that no matter what industry a business operates in, and regardless of its perceived threat level, it is never truly safe from a potential extinction-level cyber event. Additionally, as in the case of NotPetya, cyber events can decimate IT systems to the point where the loss of IT really means the loss of the entire business.
It may not be completely accurate to characterise NotPetya as ransomware, although it does carry most of the same DNA, at least in terms of its impact on business functions. Just as in the case of NotPetya, ransomware can be utterly devastating for businesses, pulling core business-critical IT infrastructure offline and encrypting highly sensitive company or customer information.
James Nunn-Price says there are a range of factors – applicable at enterprises around the world – that can lead to ransomware attacks being so potent. Some of these are:
“In every circumstance where we see big ransomware distributed, there is some sort of mechanism that gets an attacker in from the outside,” Nunn-Price explains.
“There are some themes around what those are, but eventually what they’re trying to accomplish is to get into the Active Directory (AD) backbone and weaponise it.”
“Generally it is very easy to move laterally into the AD environment, find some sort of nested group, and then travel up that nested group in order to get to higher level privileges in the AD. It’s always the same result.”
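The escalation path Nunn-Price describes can be audited programmatically. The sketch below is illustrative only – the group names are hypothetical, and the membership map would in practice be exported from AD (for example via LDAP queries on the member attribute) – but it shows how chains of nested groups that quietly connect a low-privilege group to Domain Admins can be surfaced:

```python
from collections import deque

# Hypothetical export of nested group memberships: group -> groups that are
# members of it. In a real audit this would be pulled from Active Directory.
nested_members = {
    "Domain Admins":  {"Tier1-Ops"},
    "Tier1-Ops":      {"Helpdesk-Leads"},
    "Helpdesk-Leads": {"Helpdesk"},
    "Helpdesk":       set(),
}

def escalation_paths(start: str, target: str) -> list:
    """Find chains of nested groups through which membership of `start`
    ultimately grants membership of `target`."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        # Which groups count the last group on this path as a member?
        for parent, members in nested_members.items():
            if path[-1] in members and parent not in path:
                if parent == target:
                    paths.append(path + [parent])
                else:
                    queue.append(path + [parent])
    return paths

print(escalation_paths("Helpdesk", "Domain Admins"))
# [['Helpdesk', 'Helpdesk-Leads', 'Tier1-Ops', 'Domain Admins']]
```

Surfacing, and then flattening, such chains is precisely the kind of AD streamlining referenced later in this piece.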
There is a distinct lack of preparedness amongst many organisations when it comes to what they should actually do in the event of a crisis. Disaster recovery is one thing, but what exactly can organisations do when there is no IT, or when the vast majority of their IT systems have been taken out?
That is why an effective approach to crisis recovery planning – as opposed to disaster recovery planning – is so important and this tends to be a sore spot for many enterprise organisations. It is less about bringing computer servers back online and more to do with developing a plan that brings services back, especially in an environment where an attack brings your entire business operations to its knees.
This is an important distinction, according to Nunn-Price, as bringing any number of servers back online does not necessarily mean the business will be able to function.
“That’s what the business provides. They don’t provide servers, they provide services,” he continues,
“What are all the pieces of technology that impact core business processes? How do you recover those clusters so you can have those business processes back? That’s something that very few organisations have spent the time to do, but it’s really not that difficult.”
The recovery effort also needs to occur with the whole business backing it and supporting it. That partnership between the wider business and IT is so crucial, as the business needs to provide guidance and approval regarding which critical services need to be brought back, and in what order.
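One lightweight way to capture this services-over-servers view is to record which technology components each business service depends on, and derive a recovery order from that graph. The sketch below is illustrative, with hypothetical service names standing in for whatever a business impact analysis would actually produce:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical mapping: each service lists the components it depends on.
# In practice this would come from a CMDB or a business impact analysis.
dependencies = {
    "customer-portal":  {"auth-service", "billing-db"},
    "auth-service":     {"active-directory"},
    "billing-db":       {"storage-cluster"},
    "active-directory": set(),
    "storage-cluster":  set(),
}

# static_order() yields components so that every dependency is restored
# before the services that need it, giving a defensible recovery sequence.
recovery_order = list(TopologicalSorter(dependencies).static_order())
print(recovery_order)
# e.g. ['active-directory', 'storage-cluster', 'auth-service', 'billing-db', 'customer-portal']
```

The business, not IT, decides which services top the list; the graph merely guarantees the order respects their dependencies.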
When it comes to extinction-level attacks, it might sound logical to invest heavily in preventative measures so that incidents never occur in the first place. This is a tall order, however, as it is difficult to keep pace with the sheer volume and sophistication of threats today, especially if IT is not directly associated with your core business function.
This is why Nunn-Price recommends focusing investment on recovery itself, in order to facilitate a much faster remediation rate, while ensuring that core business processes can get back up and running more quickly. That does not mean completely ignoring important preventative measures such as streamlining the AD in order to prevent privileged access abuse, or installing endpoint detection and response (EDR)/privileged access management tools.
These steps are certainly still important, as is maintaining a robust security posture. However, keeping pace with nation-state and organised-crime attackers is hugely difficult, and bringing your environments up to military grade demands constant attention and ongoing investment.
“You will never keep pace with the adversaries that you’re dealing with as an average enterprise operating in a standard industry, and you shouldn’t. If you spend that much money you’re going to lose revenue to other companies because you won’t be as competitive anymore and you’re going to stop yourself from operating effectively,” Nunn-Price concludes.
Of course, getting this balance right will always depend on your specific business, and there is nuance to developing a comprehensive cybersecurity plan. Regardless, there is a lot of value in ensuring that when everything goes belly up, you are extensively prepared.
Tags: Application Security, cyber attack, Cyber Breach, Ransomware
The post Deloitte Roundtable: Extinction level events: how ransomware has changed disaster preparedness appeared first on CIO Tech Asia.
The post Lumen Roundtable: Transformation Today to Control Tomorrow appeared first on CIO Tech Asia.
Sponsored content: Wednesday, 2nd September 2020 – Singapore
Focus Network, in partnership with Lumen, brought together leading Singapore-based IT executives from both local and international enterprises to discuss important Industry topics, including:
The impact of COVID-19 on their businesses, and the new Future of Work to rebound from the effects of COVID-19
The session was coordinated by Tyron McGurgan from Focus Network, and providing great insights to the discussions were two experienced strategy and thought leaders:
Chris Levanes, Director of Solutions Marketing and BizOps, Lumen, who is a veteran of the ICT industry with 20+ years of experience and has held numerous regional executive roles with industry leaders Red Hat, Microsoft and Hewlett Packard.
Chris Rezentes, Director, Product Management (Network), Asia Pacific, Lumen who leads the regional product strategy, P&L, and roadmap for Lumen’s entire network portfolio, and has extensive experience overseeing the strategy and expansion of fiber networks across APAC.
Levanes began the roundtable by reflecting upon the significant disruption that has occurred across the macro landscape. “A little over 8 months ago, most of you were probably focused upon Digital Transformation (DX), as was evident from data in the IDC FutureScapes survey of global CEOs. That meant that, to some degree or another, either you, your department or your organisation was likely looking to invest or continue upon DX initiatives in 2020, to ensure you remained competitive in the new digital future landscape. However, with the advent of the COVID-19 pandemic, it cannot be overstated how pervasive an impact it has had upon our social structure on a daily basis.”
Levanes continued, “At the same time, the business impact of COVID-19 has been just as severe. Entire countries and economies have been subjected to government-mandated shutdowns, resulting in new workplace health and safety regulations, restrictions on travel and business interactions, and so forth. While every organisation and industry vertical is different, the impact can generally be characterised across 3 common areas: 1) How you engage and interact with your Customers, 2) How you leverage and do business with your Supply Chains and 3) How you maintain your Operations to keep the business running and the lights on.”
The roundtable delegates then shared some of their experiences of how their organisations have been impacted in these 3 areas:
“From a manufacturing industry perspective, what we missed most is the interactions and face-to-face meetings with our customers”, notes David Tay, Chief Information Officer, Beyonics International. “One of the most important aspects of our business development is when customers visit our facility and we are able to show the skillsets we possess and the technologically advanced machinery we use to produce components for them, as well as show them samples of past products and projects, and thereby impress them with our capabilities. But with COVID-19, we are unable to invite customers to the site, especially when the majority of our customers are spread internationally outside of Singapore”.
Yan Zhang, Vice President IT, ST Engineering stated that “the impact [for her organisation] was mostly with their supply chain, as materials from overseas were impacted, which then impacted their customers. To minimise disruptions, the procurement team engaged in lots of follow-up virtual meetings with their supplier networks. Initially there were challenges with VPN and staff getting used to working virtually, but those have now been ironed out and they are settling into a good momentum with the new ways of working remotely.”
The Financial Services Industry is a highly regulated sector where adherence to regulation and governance of business continuity, security and privacy are critical. However, for some organisations, business continuity planning was not sufficiently set up for the challenges they encountered from the pandemic, and those BCPs had to be quickly expanded to address the unique circumstances. One delegate shared insight on how his FSI organisation overcame their challenges… “We look at Asia and the emerging markets as a significant growth catalyst for the business globally”, says Chris Bezuidenhout, Chief Information Officer, Deutsche Bank, “and a large proportion of our growth targets for the year were centred around our ability to expand the client base and increase trading volume in emerging markets in Asia. Most of these interactions work when orchestrated face-to-face, as they work on a trust basis; however our sales teams have been completely dislocated by the environment that we found ourselves in over the course of the last 9 months.” Bezuidenhout continued, “This prompted them to invent ways and solutions to bridge that gap. Relying on the cold interactions of a Zoom call, or just a call itself, makes it much harder to build trust, especially when interacting with corporate clients and trying to sell services to them. That was by far the biggest challenge that their organisation faced. There are certain procedures and policies enforced to prevent data leaks and [data] being transmitted externally, and that also makes it difficult for teams to interact in the way that one would expect. Hence, they resorted to innovation and made changes to integrate apps like WhatsApp and WeChat into some of their localised chat solutions, where they had security protocols enacted. So, it has definitely been a challenge for tech organisations to cater [for] solutions for the business to succeed at this point of time.”
“The infodemic is growing faster than COVID-19”, says Soon Tien Lim, Vice President IT, ST Electronics. “Most of our concerns with our employees working from home were around managing the ‘infodemic’ versus the real world, and this was seen during the SARS outbreak as well. Cyber-phishing and cyber security have been the big concerns in this new landscape.”
Rezentes also shared the experiences of Lumen. He indicated that “Lumen had received many requests from customers to help them with their immediate requirements of quickly enabling a remote workforce to ensure operational continuity.” Rezentes elaborated that “for a number of our customers, these conversations centred around redesign of the network to better support the connectivity and performance of a much higher than anticipated remote workforce.”
Looking at the future of connectivity, Levanes stated that, “Pre-COVID, the Network had become a foundational platform for transformational technologies. Post-COVID, Network Connectivity is even more vital to successfully rebounding to the ‘new normal’ of work.”
Rezentes elaborated upon this area by saying, “For many organisations who have successfully implemented Work From Home (WFH) strategies, a good portion are stating that it is unlikely they will look to fully bring back their workforce until sometime next year. Moreover, having already invested in technologies and processes to maintain their business operations throughout the pandemic lockdowns, many organisations are realising they can reduce costs by having a portion of their workforce remain working from home permanently. Some early market estimates indicate that this proportion may end up being as high as 30%. Hence the focus for many organisations is now upon how to securely and effectively optimise their network connectivity to accommodate the requirements of the post-COVID landscape.”
“We are all-in on the cloud for everything we use”, mentioned Steve Ng, VP, Digital Platform Operations, MediaCorp Pte Ltd. “As part of our digital transformation journey we are trying to engage more with customers and [are] focussing on how to increase traffic to our sites and apps to make the user experience better and incorporate measurements to understand our audience. With COVID-19, everything is now going to digital, people work from home and the content has to be more general across a wider audience. In the past, peak time [for us] was either during early morning commute hours, lunch time and after office hours, which is when people are off-work and starting to consume the content on our properties. But now it is a flat line from 7AM to midnight, as people are consuming content all the time.”
“We did an 18-month migration project to migrate our on-prem ERP to a cloud-based ERP,” shared David Tay, Chief Information Officer, Beyonics International. “Having heard the other delegates talk about the challenges of COVID-19 and work-from-home scenarios where there have been issues accessing the VPN, we were quite fortunate that we had already migrated our operations in 4 countries to the cloud from January this year.” David continued, “…so it was less of an issue when we had to work from home, as a lot of our core applications like email, messaging, conferencing and ERP are all out there in the cloud already. The challenge we had, which we shared with the management, was the fact that when working from the office, we only needed to secure ‘two doors’, but now with around 300 employees working from home using the corporate network, there are potentially 300 ‘open doors’ where viruses can creep in. And this has been our biggest challenge in recent times.”
Rezentes emphasized that, “SD-WAN has emerged as a significant architectural response to the need for increased WAN efficiencies in organizations and for optimizing the end-user experience across a plethora of public and private cloud applications.”
Maita Cabinian, APAC Regional IT Director, PZ Cussons Singapore Pte Ltd. shared her experiences: “One of our biggest challenges was that we had signed a contract with a 3rd-party provider to revamp our network, and we were supposed to go to full-scale implementation by the start of this year, transitioning our global network to the new provider. But we got into the pandemic, all travel stopped, and so did the deployment. In the meantime the business changed quite a lot: we closed several of our depots, we worked more virtually with our customers and supply chain partners, and the whole organisation moved to digital working. That changed the previous assumptions behind our network transformation, which were no longer applicable. So we had to pivot, change our strategy and come up with a combination of traditional MPLS and SD-WAN. For the big sites, where we needed connectivity to be at its most reliable, we continued with the MPLS journey, and the other, smaller sites were implemented with SD-WAN.”
“Our consideration is to adopt more XaaS, as it already has application security for the internet built in. But we are also trying to make sure that on the user side we deploy EDR (Endpoint Detection and Response), basically monitoring the device. Obviously all this requires investment, and one of the things we need to balance it against is network costs, which with SD-WAN need a restructure”, explained Justin Ong, IT General Manager, Panasonic.
Further into the roundtable, there was a deep discussion on the transport layer, and it was agreed that the flexibility SD-WAN brings to the table allows organisations to access their important and critical applications across a secure VPN network, while also taking advantage of an IP network for their other application usage.
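As an illustration of that policy logic – not any vendor’s actual configuration syntax, and with hypothetical application names and thresholds – the sketch below steers critical applications onto a private MPLS/VPN path and bulk traffic onto a cheaper internet path:

```python
# Illustrative SD-WAN-style path selection. Real appliances implement this
# in their own policy languages; the apps and thresholds here are made up.
CRITICAL_APPS = {"erp", "payments", "voice"}

def choose_path(app: str, mpls_latency_ms: float, inet_latency_ms: float) -> str:
    """Steer business-critical apps onto the private MPLS/VPN path and
    bulk traffic onto the internet path, unless the internet path degrades."""
    if app in CRITICAL_APPS:
        return "mpls-vpn"
    if inet_latency_ms > 150 and mpls_latency_ms < inet_latency_ms:
        return "mpls-vpn"  # fail bulk traffic over when the internet path degrades
    return "internet"

print(choose_path("erp", 40, 25))     # mpls-vpn
print(choose_path("video", 40, 200))  # mpls-vpn (failover)
print(choose_path("video", 40, 30))   # internet
```

This is essentially the hybrid MPLS-plus-SD-WAN split Cabinian describes, expressed as a rule rather than a topology.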
Another key topic of discussion was whether to keep in-house or to outsource any network transformation or SD-WAN adoption initiative. Rezentes indicated that this is a commonly recurring consideration for many organisations, and shared: “often many organisations look at outsourcing providers as a way to focus upon what’s right for their business, and hence leave the implementation of SD-WAN and the management of the network infrastructure to a managed service provider. Furthermore, we frequently find that when customers are engaging a service provider for their transformational connectivity, they are also likely to be seeking assistance to secure their environments.”
Levanes wrapped up the session by saying, “The pandemic has underscored the importance of digital transformation in the eyes of many CxOs. They now find themselves at a decision point – to follow the same course of cost cutting as previous recessions have dictated – or to flatten their own organisation’s recessionary curve by leveraging technology to transform and quickly rebound to the next New Normal.”
He continued, “To underpin the return to growth and strive to modernise their processes and value chains, we find that they are seeking to engage strategic technology partners that are going to help them evolve their business to the future digital landscape. Lumen has a longstanding history of serving some of the top companies in the world, and has established a strong position to support organisations on their connectivity and transformation journeys.”
This brought to a conclusion the interactive session, with participation from the delegates and great discussions facilitated by Lumen. Focus Network facilitates a data-driven information hub for senior-level executives to leverage their learnings from, while at the same time assisting businesses in connecting with the most relevant partners to frame new relationships. With a cohort of knowledge-hungry and growth-minded delegates, these sessions have imparted great value for participants. With the advent of the new ways of working remotely, Focus Network continues to collaborate with the best thought leaders from the industry to come together to share and navigate the ever-changing landscapes barrelling into the neo industrial revolution.
About Lumen
Lumen is guided by our belief that humanity is at its best when technology advances the way we live and work. With 450,000 route fiber miles serving customers in more than 110 countries, we deliver the fastest, most secure global platform for applications and data to help businesses, government and communities deliver amazing experiences.
Tags: Business Agility, CenturyLink, Cloud Services, COVID-19, lumen, remote working
The post Lumen Roundtable: Transformation Today to Control Tomorrow appeared first on CIO Tech Asia.
The post OutSystems Roundtable: Disrupt or be disrupted – How can utilities accelerate momentum towards resiliency, automation and digital innovation appeared first on CIO Tech Asia.
Focus Network, in partnership with OutSystems, brought together leading IT executives from across the Australian utilities sector for a conversation on how utilities can benefit from a rapid development platform for core transformation and digital opportunities all along the value chain.
Key takeaways:
The session was moderated by Blake Tolmie, Director of Operations, Focus Network, and providing great insights into the session were experienced strategy and thought leaders Paul Arthur and Claus Etienne.
Paul Arthur, Regional Vice President, ANZ, OutSystems – Paul has spent over 20 years working with IT solutions vendors and partners across both Europe and Asia Pacific. He has a passion for improving the customer experience through the power of IT to drive the business value for his clients. He has worked in several different industries and companies from global players to local start-ups, building businesses and teams that focus on customers’ needs and value drivers above all else. He was employee number one at Cherwell Software APAC and most recently was General Manager at Australian start-up Data Republic.
Claus Etienne, Utilities Solution Project Lead, DBResults, has a role focused on helping organisations with their needs and delivering solutions using Agile approaches. He has over 22 years’ experience in the utility sector across 5 continents, working across almost all sectors of the industry.
Utilities face infrastructure challenges as well as customer challenges across both retail and commercial businesses. This session had good representation from a combination of utility providers, spanning electricity and water through to petrochemicals. An interesting observation is that the pressures that emerged in the energy industry 15-20 years ago, when Victoria was deregulated, are now being seen in the water industry. There seems to be a cycle of evolution passing from one sector to the other.
Based on Accenture’s Disruptability Index 2.0 research from 2018, the propensity for disruption within certain industries in the utility space is rising: as markets move towards the upper-right (volatility) quadrant, both the susceptibility to future disruption and the current level of disruption climb higher and higher. Electricity sits higher than gas or water, but everything else seems to be heading in the same direction. There are currently 92 electricity retailers listed on the AER (Australian Energy Regulator) website, and the energy sector is the next scheduled to have CDR (consumer data rights) applied in Australia, which will bring more disruption and competition on the customer side. That is not to say that a lot of disruption and transformation hasn’t already occurred: across the utility industry there has been a huge step towards IoT, smart meters, smart grids and big data analytics, with significant investments made in those areas to improve deliverables, infrastructure and the efficiency of that infrastructure. We see many of these as enabling technologies rather than differentiating technologies. How you differentiate going forward is going to be the biggest challenge in the disruption phase.
“At OutSystems we talk about this as disruptive transformation rather than digital transformation,” says Paul Arthur, Regional Vice President, ANZ, OutSystems. Digital is now a given, and a recent phrase holds that ‘every company is a software company’; the reality is that there is a software element in every organisation. Disruptive transformation is where organisations focus on becoming masters of disruption itself. What we are seeing is that organisations who master the act of disruptive transformation take that act across multiple industries: their ability to disrupt transfers, like Uber moving from taxis to food delivery, or Telco organisations going from telecommunications to provisioning or retailing electricity. They have perfected the art of disrupting a particular industry and are applying the same mastery to others. Being ‘born in the cloud’ has made this easy for some of them, with no legacy or history and no technology stack that limits them. They can use the latest and greatest tech, which allows them to move fast. Most organisations aspire to that agility of performance, and most are somewhere on the curve of aspiring, embracing, evolving or mastering the art of disruptive transformation.
We can see that split across several layers. It starts with curiosity, then adoption of some of that technology, and eventually you become an innovator with it. But the key dividing line in this transformation, and where most organisations in the utility space sit, is the difference between being internally focussed and externally focussed. Internally focussed transformation is about being more efficient in your delivery and asset management, and managing your infrastructure more efficiently; that can bring down the cost base, making you more competitive in the market. What we are seeing with organisations that are mastering disruption is that they are focussing externally: directing their technology and software towards how they engage around the customer experience and the customer journey, and how they build relationships in those aspects.
A change in thinking, from being a pure service provider to adding value for customers, is important. Advising customers on how to save energy or water – by installing a smart showerhead or turning off the heater, for example – represents varying levels of engagement. It means making sure that you understand your customers, engaging with them in the appropriate manner, and talking to them in the way that they want to be talked to. There are various personas, such as people who prefer chat over a phone call, so it is of paramount importance to reach each of them effectively through their preferred channel.
This is now about creating a personal relationship. Up until now, utilities have been like a commodity: always on and always available, and you expect that when you hit the switch or turn on the tap. You expect it to deliver, and the sector has done a great job of doing exactly that, in effect making it look easy and simple for people to receive the service. The challenge created by doing so is that what you deliver becomes very commoditised, and hence differentiation can only come in two ways: on price or on experience. Those two dynamics have very different characteristics from the perspective of growing a profitable business. Deloitte talks about creating a personal relationship through the concept of a smart home platform: engaging with your customers how and when they want to be engaged with, and with the content that they want to see and nothing else. The quality of the product you deliver doesn’t change, but how you deliver it, and the experience customers have through that delivery, will often influence their decisions and drive conversations.
“With water utilities in Victoria, or anywhere in Australia, differentiating the offering is not necessarily the driver, as we are a monopoly”, noted Amanda Finnis, General Manager Information, Digital and Cyber (CIO), Coliban Water, Victoria. “However, the external-facing aspects we are having these conversations about go well beyond the customer being external. We are also looking at smart partnerships that we can engage in, as it’s no longer just about providing the water service or core services; it is looking at what kind of information or data we have with which we can partner with other people and do smart things at the community level and an environmental level.” Coliban Water are also talking about going further, as the strategic direction is very much about prosperous communities and green environments and less about corporate vision. They also have a strong commitment to reducing carbon emissions, and the data collected from the metering rollout could make interesting contributions to town and regional planning.
Another example of using data to support customers comes from WaterNSW, a current OutSystems customer. They have been looking at the best way to add value for consumers such as farmers, using the data they gather on a daily basis. Data from the BoM (Bureau of Meteorology), combined with their own, could potentially be used to guide farmers towards a better time to plant their crops: where the rain outlook is suitable, advising them to plant two weeks earlier, because getting produce to market two weeks earlier might fetch a premium price. “And it’s those sorts of things that result in a far more interactive and engaged position and create a good level of trust with the consumer”, reiterated Claus Etienne, Utilities Solution Project Lead, DBResults. “Most utility providers realise that they have a huge amount of valuable data. It’s about how you free that data, how you connect your disparate systems and bring those together, and how you integrate and share that information in an interface where everyone can make sense of it.”
“Even though we are a monopoly here and don’t have direct competitors, we have indirect competition in other ways, like solar PV systems, smart grids and so on”, commented Vibin Vijayan, Applications Services Team Lead, SA Power Networks. “Obviously we have rich customer data from meters over a long period of time, and we know their consumption trends. With solar power, we can also track how much is being put back into the grid, although at this stage we are not sure if we can legally commercialise this or not. At the same time, we can analyse this data to create customised products; even when you subscribe to a new electricity retailer, they can come up with a plan based on your consumption patterns, which results in a personalised connection. As we are not able to predict future issues, being ready with our historical data will help us adapt and move to any network or form quickly should unforeseen instances occur. Data hence plays a vital role in future direction and decisions.”
Energy companies could be proactive: for example, if a heat wave is predicted, notifications via an app asking customers to ‘please turn your air-conditioning down’ can help control the situation. At WaterNSW, the major water storage for Sydney metro is Warragamba Dam, which filled beyond 100% of its capacity and spilled. They therefore had to tell people downstream early that, due to the spillage, recreational activities in the river were not allowed. The BoM had a range of weather predictions, and at the uppermost level of rain, 300 houses were expected to flood. WaterNSW had no way of contacting those people directly and had to go via Sydney Water and distribute flyers. While that is not an everyday event, similar incidents occur across all utility providers during outages or severe weather events, and during an outage there is often no way of communicating with the affected customers. It is about how to engage customers effectively: the passive way is to have a website up, which is not the best way, whereas other means like SMS alerts offer better connectivity with customers to share vital information.
“While New Zealand is a smaller market, there are about 40 energy retailers out there. We are one of 5 generator-retailers, and for us digital transformation is front of mind from a customer journey perspective,” says John Montgomerie, Head of Portfolio and Project Management, Mercury, New Zealand. “As New Zealand went into lockdown, the reliance on digital channels to engage with our customers came to the fore. Usage of our chatbot and our mobile app ‘Mercury Go’ skyrocketed, and we recognise that. We have identified those as the most suitable channels of engagement and will continue to strengthen them into the future. While they are competitive differentiators, it is a very competitive landscape that we are playing in, and looking for a point of differentiation amongst our peers is always a challenge. So the focus is on how we add value to our customer experience, and while sometimes it’s a race to the bottom, we need to know how best to manage that and maintain that balance.”
To manage customer hardship with payments, Mercury provided a free ‘hour of power’ and responded quickly to the market with it, which went down really well with their customers. They also recognised that there will be payment challenges for customers into the future, and are conscious of the winter lockdown and of the disconnections or deactivations which could result from it. By ensuring their digital channels were activated and that calls coming into call centres could be responded to appropriately in those circumstances, they maintained a streamlined plan for consistent communication with customers. As with a lot of organisations, their customer engagement team was also working remotely, and fortunately had the infrastructure to support that.
“The story that makes me proud to work for SA Water,” says Christine Rootsey, Enterprise Technology Architect, SA Water, “is that not only are we providing discounts on bills for anybody that needs it, but in cases where people were not able to afford to put food on the table (due to the impact on their income), we could send out $50 vouchers to contribute to buying groceries. The long-term benefit and the goodwill out of that far outweighs the $50 given to that customer. With the data on our production we can predict demand, but being able to control demand is the utopia. That would be a win-win situation: knowing our customers, profiling them, and offering them advice and discounts based on demand and supply trends at various times of the year.”
Amanda added that another notion they are exploring is peer-to-peer water sharing, cutting the provider out of the loop. The rural sector is able to do that better than the urban environment. “The question is how you establish a system that goes beyond trading and is about peer-to-peer connections. As with consumers putting power back into the grid, putting rainwater back into the water grid carries another level of health consideration that we need to solve for. It’s still a new area to explore.”
This applies not just to water utilities but to energy ones as well, as in the South Australian example where AGL has announced that people can trade their energy or donate it to their friends or to charities. The tools and capabilities for making peer-to-peer engagement happen are going to emerge, especially where these are scarce resources and demand has to be managed.
What we can see in other markets, and what will over time reach utilities too, is that the ideation stage is relatively simple; it is the execution stage that is hard. The speed of execution is key, and the ability to accelerate innovation, fail and iterate again is going to be important on that journey. Utilities are things that people really need, like electricity and water, and cannot do without, but you can ideate around the experience of those things: the experience of moving to a new house, of signing up for a different package, of the type of package that you take to market, and so on. Some people may be happy to pay a fixed fee for the whole year and use as much as they like, while others prefer a pay-as-you-use model. Unless you have the technology platforms to try and build these things, you will always be chasing the market.
Among the customers OutSystems works with in the utility sector, EDP, an electricity supply organisation in Europe, is a great example of success. They were very serious about customer satisfaction and their net promoter score, yet had a very different perception of the services they were delivering compared with the reality. As we know, we are only as good as our last interaction with our customers: people expect everything to work perfectly but only remember when things don’t. Struggling with the customer perception of their services, they used OutSystems to build a new portal called ‘Easy4u’, and by doing so increased their customer satisfaction score by 17%. Building something simple and effective raised the customer experience. The provision of electricity or water into homes is seamless, and it is a shock when it doesn’t work; the interaction with people when that happens is what sticks with them. The better customer satisfaction reduced customer churn in effect. EDP then used the platform to rebuild their entire billing process, replacing an outdated legacy system, saving a huge amount on maintenance bills and gaining a much more agile approach to collecting revenues. They not only enhanced their customer digital experience but also improved their digital core.
Another customer, ENGIE in the Netherlands, went from starting their trading platform from scratch to going live in 6 months with just 6 developers, creating a trading platform across their customer base to balance and nominate the allocation of their resources. This is something they built really fast. What we are finding today is that customers across the utility space are using our platform to innovate and to execute innovation: they often had ideas about what they wanted to do but struggled with the execution, and OutSystems allows them to carry out that execution as they go forward.
“We covered various themes on driving efficiency and creating new and innovative ways of doing business, and during this time of COVID we have to be innovative and efficient at the same time”, said James Harrison, Solution Architect, OutSystems. “The challenges you face in implementing those ideas can range across how you actually build those experiences: you might need to build web apps or mobile apps, share data through APIs, or turn things around quickly to respond to the market. The OutSystems platform is designed to assist you in overcoming those challenges all at once.”
Being able to build an application across the full stack with a single developer – building up a data model, doing integration, and doing the user interface or the visual way of working – is what the platform is all about, and it makes writing and building software much easier. The platform can also integrate with any system, even a legacy one (while that is not the utopia we want), and is designed to integrate with any technology you have without the need to write code, in an efficient way. It supports the whole lifecycle model, moving from development to test in an easy way without affecting end customers: managing the entire lifecycle, not just the initial build. Finally, the types of experiences you can build on the platform range from web apps and mobile apps with beautiful-looking interfaces, all the way to things that don’t have user interfaces, like middleware, APIs and data transfer, or chatbots and non-tangible experiences like SMS and email. All this is delivered in a platform designed to scale, with built-in security. It can run in a managed cloud environment, where the infrastructure is managed for customers, or be deployed on-prem, making it very flexible in terms of deployment options.
We are helping a lot of organisations in the utility space, some more legacy and some disruptors, to create amazing solutions for their customers and internal business.
This brought to a conclusion a highly interactive session, with participation from the delegates and great discussions facilitated by OutSystems. Valuable insights were shared within the group across several areas of the utility sector. Focus Network facilitates a data-driven information hub for senior-level executives to leverage their learnings from, while at the same time assisting businesses in connecting with the most relevant partners to frame new relationships. With a cohort of knowledge-hungry and growth-minded delegates, these sessions have imparted great value for participants. With the advent of the new ways of working remotely, Focus Network continues to collaborate with the best thought leaders from the industry to come together to share and navigate the ever-changing landscapes barrelling into the neo industrial revolution.
Tags: Digital Platform, IoT, low-code platform, Outsystems, Smart Meters, System Integration
The post OutSystems Roundtable: Disrupt or be disrupted – How can utilities accelerate momentum towards resiliency, automation and digital innovation appeared first on CIO Tech Asia.
The post Lumen Roundtable: Enabling Global Media Delivery in Challenging Times appeared first on CIO Tech Asia.
Sponsored content: Thursday, 10th September 2020 – Australia, New Zealand and Singapore
Focus Network, in partnership with Lumen (formerly CenturyLink), brought together leading executives from high-profile media, entertainment and education brands to discuss important industry topics including:
The session was coordinated by Tyron McGurgan and Blake Tolmie from Focus Network, and providing expert insights to the discussion were experienced thought leaders Chris Levanes and Gautier Demond.
Brief introduction of the speakers
Chris Levanes, Director of Solutions Marketing and BizOps, Lumen Asia Pacific, is based in Singapore, and his team is responsible for driving Solutions Marketing, Pricing Strategy, Product Program Management and BizOps initiatives across Lumen’s Adaptive Networking, IT Agility and Connected Security and Media & Content businesses.
Gautier Demond, Director, Content & Media Practice, Lumen Asia Pacific started working in the video streaming industry in 2003, setting up Nintendo’s video encoding department, and currently leads the regional Content & Media product strategy, P&L and roadmap for Lumen’s portfolio of CDN, CDN Mesh, CDN Edge & Vyvx products.
One of the key objectives of this roundtable was to discuss the significant disruptions that have impacted businesses in recent times. Chris Levanes began the discussion by “reflecting upon the representative content delivery KPIs for many organisations last year. These KPIs likely included revenue growth, improving customer experience, mitigating security issues, improving performance of SaaS applications and achieving cost savings, to name a few. However, early into this calendar year, many of the KPIs for most organisations abruptly changed due to the impact of COVID-19. Some were de-prioritised, whereas others became more prominent, to help mitigate or capitalise upon the macro conditions arising from the pandemic”.
For example, industry data shared with the attendees showed an exponential increase in content streaming and delivery demands across video streaming and social media platforms, as a direct consequence of the stay-at-home orders put in place around the globe. Similarly, user consumption behaviours showed significant variations from pre-COVID patterns due to the new work and social circumstances.
“Another area where COVID had a sizeable and unanticipated impact was that many governments had to mandate or negotiate with streaming and gaming providers to help reduce the network congestion occurring in many countries”, noted Levanes. Specific to Australia, the Federal Communications Minister conducted sessions with the top streaming providers in which he asked them to reduce their bitrates, and similarly approached the gaming industry to volunteer ways of reducing its load upon the broadband network. Comparable government directives from other countries – New Zealand, the US and Europe – were shared as examples.
It was also discussed that the increased consumption of content had a corresponding negative influence upon the relationships between CDN peering partners, content providers and ISPs (internet service providers). This tension arose from the conflicting pressures to service their respective customers, including last-mile congestion stemming from many people working from home or furloughed, and was exacerbated by the need to guarantee access to what was now deemed to be essential services. Furthermore, the demand was creating considerable equipment and network fatigue, as well as pressure upon support teams, and in the event of a hardware failure, COVID-19 made it very challenging to conduct on-premise maintenance due to the movement restrictions in place. All of these factors resulted in an unusual prevalence of cloud, CDN and network outages for services which were previously considered relatively reliable.
The roundtable attendees then shared insights on how their organisations had adapted to the new landscape.
“One of the things for us is the opportunity to increase our audience given the rate of change and the kinds of behaviour [changes] that the pandemic has caused”, says Will Everitt, Director, Product Solutions, Seven West Media. “I was on a call with ITV a couple of weeks ago, and one thing they had noticed was changes in behaviour for certain demographics. Linear TV viewers of a certain demographic in Europe and the UK have now shifted to streaming, and their major concern is that those viewers won’t be coming back. That means there needs to be more emphasis or focus on streaming for us as a video company as well. This was from a domestic market perspective, so the focus is on being able to scale within this kind of market.”
“A lot of what we do is advertising-supported VOD (video on demand); we don’t sell anything directly to our viewers”, shared Will Cross, Head of Digital Operations & Support, Seven West Media. “We saw a huge increase in the number of people coming to our service. We were already trending up steadily, but it was a very noticeable spike, which then has implications on things like CDN and the actual cost of delivering that content. But what has also happened in the market, especially in ad-supported markets, is that the total pool of advertising revenue across the industries dropped, as a lot of companies in cost-saving mode aren’t spending much on advertising and marketing. This has put us in an interesting situation where we are thinking outside the box about how we are going to service more consumers than ever before in a way that reduces our base cost of delivering those services. Looking at the bitrate at which to deliver content serves two purposes: appeasing the regulatory bodies who want us to do our part in reducing congestion on the networks, whilst by the same token bringing the cost of delivery down. So it was about making the delivery of our service more efficient so we can maintain uptime, and that has been the biggest journey we have undertaken in the last few months, and an interesting one.”
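A back-of-the-envelope calculation shows why bitrate is such a potent cost lever at this scale. The audience figures and per-GB price below are made up for illustration, not Seven West Media’s:

```python
# Rough CDN delivery cost at different streaming bitrates.
# Viewer numbers and per-GB price are illustrative assumptions only.
viewers, hours_per_day, price_per_gb = 500_000, 2.0, 0.02  # USD

def monthly_cost(bitrate_mbps: float) -> float:
    gb_per_viewer_hour = bitrate_mbps * 3600 / 8 / 1000  # Mbps -> GB per hour
    return viewers * hours_per_day * 30 * gb_per_viewer_hour * price_per_gb

for mbps in (5.0, 3.5):
    print(f"{mbps} Mbps: ${monthly_cost(mbps):,.0f} per month")
# 5.0 Mbps: $1,350,000 per month
# 3.5 Mbps: $945,000 per month (a 30% saving, tracking the bitrate cut)
```

Delivery cost scales linearly with bitrate, which is why a modest bitrate-ladder adjustment can satisfy both the regulator’s congestion request and the finance team.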
“We are an audio business; we stream radio and podcasts, and with COVID we saw an immediate shift in people’s consumption patterns. They used to consume audio during their commute via the FM broadcast, and now that people don’t have radios at home, we have seen a shift to smart speakers via Alexa or Google Home; 25% of all of our streaming came via smart speakers”, says Fayad Tohme, Chief Digital Officer, Nova Entertainment. “Behaviours and consumption changed dramatically, and we shifted from an FM broadcast to an internet service. So the internet became a core channel when people were in lockdown. We had to drive that awareness and make sure, not just from an operations perspective to ensure stability, but also from a marketing or promotional perspective, that people knew where to consume. That education piece was core for us, as was reporting back to the market on consumption and stability.”
“Our key business area is teaching and learning, and that was hugely impacted. One of the things that we pride ourselves on is the teaching experience on campus, and with the onslaught of COVID we had to change our priorities and investments and focus on remote teaching and remote learning options”, said Sanit Kumar, Infrastructure Services Portfolio Manager – Cloud, Network & Datacentre Services, University of Auckland. “There was also a cultural change element, because some of the academia were not used to teaching remotely. So that change element was significant, and they were well supported in managing those new ways of teaching and creating content around online teaching and the services offered by the University. As far as content delivery is concerned, there was an actual reduction in internet traffic on campus from a local consumption perspective, but we saw a huge traffic increase in our digital learning platform, which we were already in the process of modernising pre-COVID. We also had challenges with students stuck overseas, and we managed some workarounds to onboard them digitally so they could get an experience similar to the one they used to have on campus.”
Gautier Demond shared a vendor’s perspective, saying, “from a Telco and CDN standpoint, our business has also been notably impacted. At the beginning of the year, every CDN service provider was focussing on similar topics of performance, cost optimisation, the ability to unlock features, and expansion within the country. But when we started observing the pandemic trends and seeing massive spikes in traffic, we made some substantial adjustments. One immediate focus was upon hardening our network and services to cater for the intensified traffic. We also began to segregate customers based upon whether they were now classified as an ‘essential service’, or on how ‘spikey’ their traffic was. Our objective was to create zones of relatively stable traffic which were easy to manage, and then dedicate other resources to customers with extremely spikey traffic, to ensure we were better positioned to respond to unprecedented volumes or unplanned events. We also modified our collaboration with ISPs by having daily conversations with them, to garner insights on external events such as press conferences, release updates from gaming companies, etc., with a view to anticipating high-volume days and times.”
Having responded to the immediacy of the COVID crisis, many organisations are now seeking to improve their business agility, both to cater for the ongoing uncertainty of the pandemic and to set themselves up for the ‘next normal’ future landscape. For many, this involves considering a multi-CDN vendor strategy, as well as a desire to negotiate more flexible commercial agreement structures. The discussion below represents the attendees’ thoughts and issues on this topic.
“We are in a unique position as we are a publisher, Telco and a CDN. In terms of growth, what we are seeing is a trend upwards and, if anything, an acceleration of that growth as a result of COVID”, stated Jeremy Brown, Associate Director Video Delivery, Optus Sport. “We needed a multi-CDN strategy to create those redundancies, which are very important not just on our outgoing CDN but through our origins and other parts of the network, and looking at shielding those products is very important to our strategy. With sport, we deliver huge amounts of traffic in HD at high concurrency. QoS (quality of service) is a very important metric, and there are a couple of really good players in the market who create nice dashboards so we can monitor how we are performing.”
“We have seen that same increase or uptake in usage during COVID-19, as the others have reported”, says Craig Bruce, Head of Engineering – 9Now, Nine Network. “We have had some internal discussions going on regarding the viability of multi-CDN. We are a localised service within Australia, and we haven’t had huge issues from the CDN side of things. By and large our traffic is quite predictable, with shows and events, current affairs and news content, especially in the early part of the lockdown period. We are certainly exploring how to improve our service to end users.”
“We are a business with streaming content, streaming live racing across the country, and we also run a live betting platform. When COVID first hit, we had to move all of our production remote, as well as all of our tech teams”, notes Simon Mackay, Head of Technology, TAB New Zealand. “During the Melbourne Cup, we anticipate loads bigger than the rest of the year, so we had to build a network that would withstand huge loads. As an example, we do about 1 million transactions in race 7 of the Melbourne Cup alone, over a 15-20-minute period. With regards to a multi-vendor strategy, COVID hit us very hard, and we had to lean on our vendors very heavily to give us rebates from a long-term relationship point of view. The catch with having multiple vendors is that you dilute the ability to have a strong vendor who can hold you up when the revenues are not coming in. As much as we would like to have multiple vendors to create a bit of competitive tension, it’s also good to have a vendor with a good long-term relationship, and that stood us well.”
Dennis Dovale, Manager, Media Technology, Tabcorp / Sky Racing mentioned, “what makes this year very different from last year is that our current traffic flows are already higher than the Melbourne Cup last year, and that’s because most of our venues are down, so people are not in the venues and are using the digital platform. We are planning for a 100% increase in traffic through our CDN for the Melbourne Cup this year, which is a big thing for us. The multi-CDN approach is great, but we have the same problems of commitment with the current CDN and the amount of money we have to spend to have that in place, unless there is a new process within CDNs where we can talk about a consumption model.”
“We have seen a huge change in traffic patterns over this COVID period, from over 2 million students learning on campus to suddenly everybody moving home and connecting via Optus and TPG etc., and we expect that to remain the same for the foreseeable future”, described David Wilde, Chief Technology Officer, AARNET. “Questions of resilience, capacity and scaling are some of the lessons that came out of this experience. For the higher-ed sector, the ability to shift to learning from home is a completely different practice for a lot of the universities. Many of the regional universities have done it for a long time already, delivering online courses of various sorts, but for a lot of the bigger universities the structure was built around students coming into lecture theatres. One interesting thing in our sector is that sometimes a DDoS [attack] is indistinguishable from perfectly legitimate research work. A researcher who is pulling down 50 gigabits per second from the Hadron Collider or an astronomy telescope looks a lot like a DDoS, so we always have a practice around distinguishing strange behaviours on the network from legitimate traffic.”
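The triage practice Wilde describes can be approximated with a simple allow-list rule in front of volumetric alerting. The endpoints and threshold below are hypothetical, purely to illustrate the idea:

```python
# Flag very high-volume flows only when the source is not a known research
# endpoint. Hostnames and the threshold are illustrative assumptions.
KNOWN_RESEARCH_SOURCES = {"lhc-tier1.cern.example", "askap-telescope.example"}
VOLUME_THRESHOLD_GBPS = 10.0

def triage(source: str, gbps: float) -> str:
    if gbps < VOLUME_THRESHOLD_GBPS:
        return "normal"
    if source in KNOWN_RESEARCH_SOURCES:
        return "high volume but expected (research transfer)"
    return "investigate: possible DDoS"

print(triage("lhc-tier1.cern.example", 50.0))  # expected research transfer
print(triage("203.0.113.7", 50.0))             # investigate: possible DDoS
```

Real deployments layer far richer signals (flow symmetry, destination spread, protocol mix) on top, but the principle of clearing known heavy-hitters first is the same.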
Levanes shared that, “in addition to DDoS, the number of security attacks has risen quite significantly during the pandemic. For example, people’s thirst for more knowledge around the COVID virus has been leveraged by hackers to unleash phishing scams to obtain user credentials in which to begin attacks upon Corporate resources. Hence an integrated approach to security is becoming extremely important.”
Demond also contributed to the topic, saying, “I concur with the view that up until recent times, the big value of a single-source approach was that you could negotiate a good rate with volume purchasing power. But what we are hearing from our customers is that the increased risk of global outages, and the demands of the new normal, have pushed the business to consider the benefits of a multi-vendor approach and redundancy, because the cost savings don’t necessarily outweigh the business impact of any outage. Furthermore, when talking about multi-CDN, which is not new and has been around for a few years now, there has also been the notion of moving away from mandatory commitments. Probably one of the most common requests coming from our customers is their desire to engage on a usage-based model, which we can support.”
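For readers less familiar with how multi-CDN works in practice, the sketch below shows one common pattern: steering traffic by recent QoS measurements while keeping every provider warm for failover. The providers, metrics and weighting scheme are hypothetical:

```python
import random

# Recent QoS measurements per CDN, e.g. from client-side monitoring.
# Provider names and numbers are illustrative only.
cdn_stats = {
    "cdn-a": {"error_rate": 0.002, "p95_startup_ms": 900},
    "cdn-b": {"error_rate": 0.010, "p95_startup_ms": 700},
    "cdn-c": {"error_rate": 0.004, "p95_startup_ms": 1100},
}

def score(stats: dict) -> float:
    """Lower is better: weight errors heavily, startup time lightly."""
    return stats["error_rate"] * 10_000 + stats["p95_startup_ms"] / 100

def pick_cdn() -> str:
    # Weighted random choice biased toward better-scoring CDNs, so no single
    # provider takes all the traffic and failover paths stay exercised.
    names = list(cdn_stats)
    weights = [1.0 / score(cdn_stats[n]) for n in names]
    return random.choices(names, weights)[0]

print(pick_cdn())  # most often 'cdn-a'; the others still receive some traffic
```

A usage-based commercial model of the kind Demond mentions pairs naturally with this approach: traffic shifts to whichever provider is performing, and billing follows the traffic.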
The conversation also touched upon the emergence of new technologies, including: mesh delivery, the return of multicast, cloud edge computing, and new adaptive ecosystems where the entire video processing ecosystem adapts based on the demand or the network conditions. The participants discussed their challenges and investments in new technologies and business innovations.
Wilde shared that “automation and analytics are very important now in the higher-ed sector, as things are going to be lean for some time. If you look at the billions of dollars that have disappeared in terms of revenue, that is not going to come back straight away, and the sector will be hurting for years. Therefore, the ability to be smarter about how we deliver services and how we support traffic is going to be key over the next couple of years.”
“With SD-WAN and the ability of the network to adapt to current consumption, or the current needs of customers, we are starting to see that pattern, and we continue to work on how we can apply it to delivery and video processing. The network-defined video processing chain is definitely one of the things we are looking at from a Lumen standpoint”, says Demond.
Mark Wardle, VP Engineering & Operations, APAC, Encompass Digital Media says, “We are talking to a lot of our vendors about operating-cost models more than capital-expenditure models, so if channels or concepts want to pop up quickly, speed to market and a flexible cost base are key features for us, rather than having to invest a lot of capex upfront with long commitments.”
Lumen CDN is focused on fast, secure and reliable content delivery.
This brought to a close a highly interactive session with participation from the delegates and great discussions facilitated by Lumen. Focus Network facilitates a data-driven information hub where senior-level executives can leverage their learnings, while also assisting businesses in connecting with the most relevant partners to form new relationships. With a cohort of knowledge-hungry and growth-minded delegates, these sessions impart great value to participants. With the advent of new ways of working remotely, Focus Network continues to collaborate with the industry’s best thought leaders to come together, share, and navigate the ever-changing landscape barrelling into the neo industrial revolution.
Tags: Business Agility, CDN, CenturyLink, Cloud Services, Lumen, Networks
The post Lumen Roundtable: Enabling Global Media Delivery in Challenging Times appeared first on CIO Tech Asia.
The post SentinelOne Roundtable: Minimising risk from cyber threats: focus on reducing time to containment appeared first on CIO Tech Asia.
Focus Network, in partnership with SentinelOne, brought together leading IT security executives to discover how they are dealing with the challenges of digital transformation and technology sprawl, and how they view the opportunities around security automation.
The session was coordinated by Blake Tolmie, Director – Operations, Focus Network, and expertly moderated by Andrew Milroy, Principal Adviser, Eco-System, with experienced strategist and thought leader Jan Tietze providing great insights throughout.
Jan Tietze, Director Security Strategy EMEA, SentinelOne – Before joining SentinelOne in 2020, Jan Tietze served in senior technical and management positions, ranging from engineering to CIO and CTO roles, for global IT and consultancy organisations. With a strong background in enterprise IT and an early career in senior field engineering roles at Microsoft and other security and consulting organisations, Jan understands real-world risk, challenges and solutions, and has been a trusted advisor to his clients for many years.
Andrew Milroy, Principal Adviser at Eco-System, an analyst firm based in Singapore, moderated the session and welcomed the delegates. The roundtable was also joined by the SentinelOne team: Jan Tietze, Director of Security Strategy; Evan Davidson, Vice President and Head of Region; Lawrence Chan, Head of Regional Sales; and Kelvin Wee, Technical Director for APAC.
The SentinelOne event, in partnership with Focus Network, presented the theme of minimising risk from cyber threats, with a focus on reducing ‘time to containment’. Security teams today are working hard on the front lines, identifying, analysing and mitigating threats. Yet despite all their efforts, visibility into malicious activity remains challenging: according to the Ponemon Institute, the mean time to identify a security breach is still 197 days, which is quite astonishing, with the mean time to containment being another 69 days after initial detection. The reality is that with current reactive approaches to cyber defence, there simply aren’t enough skilled professionals to analyse the volume of incidents most organisations face. With limited resources, an ever-growing skills gap and an escalating volume of security alerts, organisations are left vulnerable to what is often perceived to be unavoidable risk. This environment demands more from already resource-constrained CISOs and other cybersecurity professionals. The focus today is on how automation can help: specifically, how it can drastically reduce the number of uninvestigated and unresolved alerts, automate time-consuming investigations, remediate well-known threats, act as a force multiplier for resource-constrained security teams, and reduce an organisation’s security risk exposure, including time to containment and remediation.
Defining incidents. A computer security incident is any adverse event that negatively impacts any of the three goals of security. Traditionally, information security has confidentiality, integrity and availability as its goals; if, say, integrity is disturbed at the processing, storage or transmission layer, we have a computer security incident. Cybersecurity, or information security, is the practice of ensuring that we have fewer of these incidents and that their impact is lower.
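To make that definition concrete, here is a minimal sketch (the names are hypothetical, not any vendor’s model) that flags an adverse event as an incident when it degrades any of the three goals:

```python
from dataclasses import dataclass, field

# The three classic goals of information security.
SECURITY_GOALS = {"confidentiality", "integrity", "availability"}

@dataclass
class AdverseEvent:
    description: str
    impacted_goals: set = field(default_factory=set)  # subset of SECURITY_GOALS

def is_security_incident(event: AdverseEvent) -> bool:
    """An adverse event is an incident if it degrades any CIA goal."""
    return bool(event.impacted_goals & SECURITY_GOALS)

# Example: tampering with data at the storage layer breaks integrity.
tampering = AdverseEvent("unauthorised write to the storage layer", {"integrity"})
assert is_security_incident(tampering)
```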
“It may be different in different industries, but we’re all part of a long global supply chain in one way or another, or of a local critical service, and safety is one of the most critical outcomes of our profession”, says Jan Tietze, Director Security Strategy EMEA, SentinelOne. “Incidents are the result of risk, and risk, as a theoretical concept, is the quantifiable expected loss of operating information systems; you can look at it in an abstract way as the cost that an asset causes in an incident. In cybersecurity, we reduce either the cost that occurs during an incident (the impact), or the attack surface of the assets in that class, or we optimise the frequency per year through configuration, best practices and additional security controls. So what we do is optimise critical metrics in the risk management process. And I think it’s widely accepted that infosec is actually a risk management discipline.”
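Tietze’s framing maps onto the classic annualised loss expectancy formula: expected yearly loss is impact per incident times frequency per year. A minimal illustration, with made-up dollar figures, of why reducing either factor reduces risk:

```python
def annualised_loss_expectancy(cost_per_incident: float,
                               incidents_per_year: float) -> float:
    """Expected yearly loss: impact per incident times frequency per year."""
    return cost_per_incident * incidents_per_year

baseline     = annualised_loss_expectancy(250_000, 2.0)  # 500,000.0 per year
# A control that halves incident frequency halves the expected loss;
# so would a control that halves the impact of each incident.
with_control = annualised_loss_expectancy(250_000, 1.0)  # 250,000.0 per year
print(baseline, with_control)
```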
There are three phases in any incident handling or incident response methodology: a pre-incident phase, before an incident occurs; a post-incident phase, after the incident has occurred; and the peri-incident phase, during the incident, where you actually deal with and handle it. Different methodologies, such as SANS and GARC, define what happens during each of those phases. This session was oriented around the GARC methodology published in a SANS paper, but the concepts apply regardless: whether you treat lessons learned as part of the incident or of the post-incident phase, every methodology captures lessons learned at some point after the incident. The repeatable process, the one that occurs with every incident, is the phase during the incident. The hypothesis to consider is that the one metric we can use to influence the cost to the organisation, the risk to safety and the risk to availability of compute systems is the end-to-end ‘time to contain’: the time from the start of the incident, when the compromise happens or the attack starts disrupting your business, until you have restored the trustworthy state of the compute environment.
Two weeks ago, attackers gained access to the university hospital in Düsseldorf, presumably last year, through a Citrix security vulnerability that they exploited. They basically put that particular hospital on the list of places they were going to work on compromising, performing the actual ransomware action once their project team had time to deal with it. The actual compromise happened a long time ago; there was a long window in which they went undetected but already had access.
Time to contain, to recap, is the time to regain full control of all affected assets and restore the trustworthiness of the environment, and it is the key metric under discussion. It can be decomposed into the individual phases of an incident: it starts with the initial compromise or disruption, and then come the phases in which you deal with it. A detection needs to occur, and sometimes an automatic response can occur; alerts need to be raised, and someone then has to identify whether the incident is real and whether it needs manual follow-up. To optimise the end-to-end time between T0, when the incident occurs, and the point at which you are done, you can optimise every phase, and it starts with detection and the efficacy of performing that detection.
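As a rough sketch of that decomposition (the phase names and timestamps below are illustrative, not a formal standard), the end-to-end metric breaks into dwell time, response lag and recovery:

```python
from datetime import datetime

def phase_durations(t0_compromise, t_detect, t_respond, t_restore):
    """Decompose the end-to-end time to contain into its phases."""
    return {
        "dwell_time":      t_detect  - t0_compromise,  # undetected presence
        "response_lag":    t_respond - t_detect,       # triage and decision
        "recovery":        t_restore - t_respond,      # restoring trust
        "time_to_contain": t_restore - t0_compromise,  # the end-to-end metric
    }

d = phase_durations(datetime(2020, 1, 1),  datetime(2020, 7, 16),
                    datetime(2020, 9, 1),  datetime(2020, 9, 23))
print(d["dwell_time"].days, d["time_to_contain"].days)  # 197 266
```

Shrinking any one phase shortens the whole metric, which is why the discussion keeps returning to detection: it sits first in the chain.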
Stronger approaches to containment rely less on prior knowledge; they can use programmatic detection that is autonomous and does not need a human in the loop to turn telemetry about what has occurred on an endpoint into a detection. The 2018 Ponemon study commissioned by IBM looked at the ‘dwell time’ until the first detection in a system and found that the mean was 197 days. Even though they looked at large-scale incidents, 200 days is still a long time to act, and time that goes largely unused. In those instances, enacting automatic responses is more effective than manual responses, because speed is critical. However, many organisations struggle to implement that because of high false-positive counts: endpoint detection and response systems based on endpoint telemetry struggle to distinguish benign behaviour from malicious behaviour, and the resulting false positives mean you can’t really use automation.
Automated controls require a high signal-to-noise ratio, and not all systems are equipped to provide that. Alerts typically flow in as a nearly endless stream of input for the people in security operations or the cyber defence centre to respond to, and they have a tendency to ignore some or all of them. In fact, there are systems that specialise in ignoring alerts, correlating logs from multiple sources and trying to prioritise for you. Better systems provide prioritised alerts so that you know what to do and can focus on a small number of events rather than spending your day sifting through hundreds of alerts. Identification also improves as more of it is automated: if the initial analysis demands an intense human element, such as following process IDs across different systems, it is difficult to build a workflow in which people collaborate on resolving an incident, making the response less effective.
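One way to picture the signal-to-noise requirement is a simple gate that only permits automated containment when detection confidence is high and the rule’s historical false-positive rate is low. The thresholds below are arbitrary illustrations, not product settings:

```python
def should_auto_respond(confidence: float,
                        historical_false_positive_rate: float,
                        confidence_floor: float = 0.95,
                        fp_ceiling: float = 0.01) -> bool:
    """Only automate containment when the detector's signal-to-noise
    ratio makes an unjustified quarantine unlikely."""
    return (confidence >= confidence_floor
            and historical_false_positive_rate <= fp_ceiling)

# A noisy telemetry rule (10% false positives) falls back to a
# human-reviewed alert; a high-precision behavioural detection
# is contained immediately.
print(should_auto_respond(0.97, 0.10))   # False -> queue for an analyst
print(should_auto_respond(0.97, 0.005))  # True  -> isolate the endpoint
```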
It is imperative today that we correlate individual actions and make the result available. One manifestation of poor correlation is that people complain about being understaffed and about the skills shortage in our industry; very often, though, that is a symptom of poorly integrated, disparate systems that force analysts to switch context between one tool and the next and prevent them from operating everything at their disposal. Whenever the boundaries between systems are crossed, there is typically no correlation across sources. And this whole process from detection to containment takes, on average, between 56 and 69 days, depending on whose figures you use; the Mandiant security report 2020 gives the lower end. It is that mean ‘time to contain’ after detection that the industry can work on.
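A toy illustration of the correlation idea, grouping alerts from disparate tools into candidate incidents by host and time window (real products correlate on much richer keys, such as process lineage):

```python
from collections import defaultdict
from typing import Dict, List

def correlate(alerts: List[dict], window_minutes: int = 60) -> Dict[tuple, list]:
    """Group raw alerts from disparate tools into candidate incidents
    by host and coarse time bucket, so an analyst sees one storyline
    instead of many isolated events."""
    incidents = defaultdict(list)
    for a in alerts:
        bucket = a["timestamp_minutes"] // window_minutes
        incidents[(a["host"], bucket)].append(a)
    return incidents

alerts = [
    {"host": "srv-01", "timestamp_minutes": 10,  "source": "EDR",      "event": "suspicious process"},
    {"host": "srv-01", "timestamp_minutes": 42,  "source": "firewall", "event": "outbound beacon"},
    {"host": "db-02",  "timestamp_minutes": 300, "source": "IDS",      "event": "port scan"},
]
for key, events in correlate(alerts).items():
    print(key, [e["source"] for e in events])
# The two srv-01 alerts collapse into one candidate incident.
```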
Ultimately, the business outcome is to be able to recover quickly when an incident occurs, and to know that it has occurred. There are different and competing approaches; some have merit, while others, such as those that rely on prior knowledge, are outdated. In general, though, the technology is less relevant than the desired outcome. Several important principles can be distilled from this line of thinking:
Automation should take precedence over human work. If you can automate a response, and can afford to do so from a risk management and false-positive perspective, then you should: it stops many incidents cold, early in their tracks. There is no glory in prevention, but automation is what stops a small-scale incident from becoming a large-scale one.
Autonomy is another important concept: not depending on sending data somewhere else in order to respond, but being able to perform both the detection and the response without needing to consume outside knowledge.
Correlation is really important in making humans more effective; helping the human responder understand what has happened is a function of correlating the information and the telemetry and bringing to their attention what really occurred during the incident. Correlation is everything when it comes to making sense of what has technically been observed. And when you need to respond, there is the question of visibility across an environment where different kinds of assets may have different tooling; end-to-end visibility and the ability to respond quickly from one place are of paramount importance in these situations.
A good example is the BlueKeep vulnerability in RDP, which existed for 19 years until the NSA discovered that it had been actively exploited for a significant portion of that time. People closed RDP as a preventative measure. Having the ability to perform these response actions in one place is very powerful, because it gives the SOC the tools it needs to respond effectively, implement lessons learned and prevent other incidents from occurring.
Azril Rahim, Senior Manager, IT Security, Tenaga Nasional Berhad, a Malaysian utility, says, “If we borrow a concept from boxing, there is a term called TKO (technical knockout). In this case, the most important issue is detection, and without good detection the rest of the process is a TKO. Hence we really need to address endpoint detection to a far greater extent.”
“The lowest-hanging fruit, in terms of where we can save the most time and probably have the biggest chance of reducing impact, is reducing the time to detection, because allowing attackers to compromise an environment and establish multiple persistence points basically means we’re blind when we respond to the first detection of an incident”, noted Jan Tietze, Director Security Strategy EMEA, SentinelOne. “Since 2012 there has been an emerging market of endpoint detection and response, and when you look at the numbers, for instance in Mandiant’s reports, you see that the mean dwell time has reduced since then. I think that is largely due to the introduction of endpoint detection and response technologies, which aim to use telemetry to look at behaviours and describe not a concrete threat but the behaviours that make up an attack. However, many of those require concrete knowledge of what a particular attack looks like. In other words, they would not detect BlueKeep, which was unknown for 19 years until the day it became a known vulnerability, with proof-of-concept exploits in the wild that could be described so you knew what the behaviour looked like. They were looking for known bad. I think the more effective technologies are the ones that don’t look for known bad, but for ‘post-exploitation attacker behaviours’: the things that any attacker needs to do after compromising an environment. Even if you’re hit with a brand-new vulnerability that wasn’t known before, the attackers still need to act on their objective. They still need to perform reconnaissance, move from one device to another in your organisation, and, depending on what they’re after, exfiltrate credentials and information. Detecting those actions gives you a very short time span between the attackers compromising an environment with a completely new, currently undetectable attack and your knowing they are there, being able to respond, and responding automatically.”
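A highly simplified sketch of behaviour-based detection as Tietze describes it: instead of matching known-bad signatures, score a host by how many classes of post-exploitation behaviour have been observed. The category names are illustrative, not SentinelOne’s taxonomy:

```python
# Hypothetical behaviour categories an analyst might map telemetry onto;
# real EDR products use far richer behavioural models.
POST_EXPLOIT_BEHAVIOURS = {"reconnaissance", "credential_access",
                           "lateral_movement", "exfiltration"}

def behaviour_score(observed_events: list) -> float:
    """Fraction of post-exploitation behaviour classes seen on one host.
    A signature engine asks 'is this a known-bad hash?'; this asks
    'is someone doing what any attacker must do after a foothold?'"""
    seen = {e["category"] for e in observed_events} & POST_EXPLOIT_BEHAVIOURS
    return len(seen) / len(POST_EXPLOIT_BEHAVIOURS)

events = [{"category": "reconnaissance",   "detail": "net group /domain"},
          {"category": "lateral_movement", "detail": "remote exec to srv-02"}]
if behaviour_score(events) >= 0.5:
    print("raise a high-priority detection even with no known-bad signature")
```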
“I think it’s (automation) a very hot topic and one where we definitely have most of the challenges,” says Thomas Robert, Global Head of Infrastructure Operations, CACIB. “We have managed to implement some of the measures mentioned, with the relevant correlation and some automation, but primarily scenario-based, and that’s where it has been challenging: the scenarios you build are usually based on past experience and not necessarily forward-looking into what could happen. We have had this strategy of Cybersecurity Correlation and Automation (CSCA) for years now, but it’s not necessarily that well integrated, and I’m definitely looking forward to solutions that interact more with each other and share information dynamically, so that the analysis of the situation and of the behaviour is more relevant than what you can get from individual systems. That is also complicated when you use multiple systems, usually from different vendors, where it’s not always easy to achieve a good level of interaction. On the other side, if you go with only one vendor, you can get a good level of integration, but it’s better to have different vendors, so that we have different perspectives on a scenario and can provide better protection than a single vendor, which might have a failure in one domain.”
Steve Ng, VP, Digital Platform Operations, Mediacorp Pte Ltd notes that, “The current situation gives us more time to really explore learning experiences, either on our own, picking up new skills, new tactics and new approaches, or through a lot more online engagement with vendors. We learn a lot from each other. What we are currently doing is also testing and prototyping some new approaches on cloud to help us do continuous threat hunting. That gives us a better scan of our perimeter, by knowing what we have inside and what is on the perimeter, including what’s coming. That is one of the areas where we are developing capability and competency now, and we should have this platform up and running soon. So, although we are working from home, we can actually deliver substantial improvements to our security posture.”
“We are looking at how to improve our security posture,” says Soon Tein Lim, Vice President, Corporate Development, ST Engineering Electronics. “For example, for remote work from home, we have a VPN for most of the users. In the process, we realised that users at home can get onto the internet from the office computer without going into the VPN immediately. Hence, we have installed GlobalProtect, software from Palo Alto that forces the whole connection back to the office whenever a user at home gets onto Wi-Fi. At the endpoint we have introduced EDR, endpoint detection and response, and we also have SOC monitoring.”
“There should be an agent on servers and an agent on clients, one that also protects Kubernetes, Docker environments and Linux workloads. There should literally not be a compute workload of any value that does not have one. That is an extra protective system that looks at flows coming in and filters them out”, says Jan Tietze, Director Security Strategy EMEA, SentinelOne. “The scope differs among those solutions, and when you have response options, very often they are manual. Manual means you use the EDR agent to enforce the response, but it has to be initiated by a human, and I think that’s just too slow. The types of responses are very often things like writing a script or killing a process, being very pinpointed in your response. Whereas what we do, for instance, is give you the ability to roll back all of the actions associated with a chain of events: we correlate what happens in real time on the endpoint, and we can roll back all of the changes performed by that chain of events. Also, if a user logs in with compromised credentials and performs a series of actions, and one of those is identified as malicious, we can go back to the beginning of the actions in that session and remove what they have performed.”
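The rollback capability Tietze describes can be pictured as a journal of changes keyed by the chain of events that caused them. The sketch below is a toy model of the concept, not how SentinelOne implements it:

```python
from collections import defaultdict

class ChangeJournal:
    """Toy journal of endpoint changes keyed by the chain (storyline)
    that caused them, so everything a malicious session did can be
    reverted together. Purely illustrative, not a vendor design."""
    def __init__(self):
        self._changes = defaultdict(list)

    def record(self, chain_id: str, undo_action):
        self._changes[chain_id].append(undo_action)

    def roll_back(self, chain_id: str):
        # Undo in reverse order: the last change is reverted first.
        for undo in reversed(self._changes.pop(chain_id, [])):
            undo()

journal = ChangeJournal()
journal.record("session-42", lambda: print("restore encrypted file from snapshot"))
journal.record("session-42", lambda: print("remove persistence registry key"))
journal.roll_back("session-42")  # reverts both changes, newest first
```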
A few years from now, we will not be talking about on-prem in this particular context, although it depends on the region. There are regions in the world that are very likely to stick with on-premise solutions, for regulatory reasons, security, data sovereignty, distrust and the like; Germany is one of those markets, as is the Middle East, where you see the same kinds of issues. However, the scale at which you need to process telemetry data does not lend itself well to doing it on-prem. You literally need hundreds of servers to satisfy relatively simple queries; otherwise you end up with an EDR telemetry system that only lets you hunt over a limited set of data for a limited period of time, or you end up building gargantuan infrastructures that you then have to maintain and ship the data to. As users become more mobile, work remotely and live in a more connected world, the natural flow of that data is towards a place that holds a lot of it and a lot of compute.
Mac Esmilla, Global CISO, World Vision International, acknowledged the skills gap in the market, saying, “We have a big team of IT folks in the Philippines, approximately 400. There’s no shortage of technical IT people with experience working with tools, but there is a great shortage of people who understand cybersecurity. We can easily find people with the technical skills, but not the security skills. Process is very important, and there’s a lack of people with a good understanding of process, especially how to interface with legal processes, data protection requirements and so on. Hence we do a lot of training and enablement, and we pick partners who have good knowledge-transfer and enablement programs. It’s good to have partners who actually know what they’re doing, not just selling you a ‘buy a tool and you look cool’ pitch. You’re really entering into a partnership, not just subscribing to a tool or a technology kit. So we’re very conscious about partnering with the right people, with the right mentality, with the right experience and with the right attitude as well.”
“In our interactions with various customers, the need for containment and isolation has been a very painful point for them”, shared Kelvin Wee, Technical Director – APAC, SentinelOne. “Lack of automation is another of those key points, and when it comes to going even deeper, to having the ability to do the forensics, automation becomes vital. In many cases, because of the lack of insight, customers lose visibility in those aspects and become somewhat lost in the whole situation. That’s where they look for technologies and partners like us who can provide guidance in a trusted advisor role. That’s how we help them.”
“SentinelOne has a readiness package”, noted Lawrence Chan, Head of Regional Sales. “The main intention of this package is to coach our customers through onboarding the tool itself. We go through the process, perform the actions and do our best with end-to-end remediation, containment, annotation and rollbacks. The aim of the exercise is to hand over a brand-new, clean dashboard that the customer then continues to monitor. SentinelOne Readiness uses a structured methodology and personalised assistance for deployment planning; it includes environment discovery, SaaS configuration assistance, and staged agent pushes to get you endpoint coverage as fast as possible.”
“Mac Esmilla mentioned choosing the right partner, and that really resonated with me. Cybersecurity is a very fast-moving space, and whoever has a good solution today does not necessarily have one a year down the road”, noted Jan Tietze. “I’ve worked for companies that solved one hard problem very well and unfortunately failed to address other problems that developed in the space over time. Very often SOCs have very little visibility when new modes of operating and deployment are not part of the pipelines for continuous integration and deployment. I think the right partner is one that has demonstrated the ability to listen to customers, to address new and emerging landscapes as well as new and emerging threats, and to work with customers as part of its innovation process. That’s really key: it’s not a static purchase, but a longer-term partnership.”
That brought to a close a very exciting conversation touching on many of the most pertinent and relevant issues in cybersecurity generally, and more specifically on automation and the issues and challenges associated with it. The highly interactive session, with participation from delegates across the APAC region and great discussions, was put together by SentinelOne.
Focus Network facilitates a data-driven information hub where senior-level executives can leverage their learnings, while also assisting businesses in connecting with the most relevant partners to form new relationships. With a cohort of knowledge-hungry and growth-minded delegates, these sessions impart great value to participants. With the advent of new ways of working remotely, Focus Network continues to collaborate with the industry’s best thought leaders to come together, share, and navigate the ever-changing landscape barrelling into the neo industrial revolution.
Tags: automation, cyber threat, SentinelOne, Time To Contain
The post SentinelOne Roundtable: Minimising risk from cyber threats: focus on reducing time to containment appeared first on CIO Tech Asia.