Workshop on Security Operations Center Operations and Construction

Monday, 27 February

  • 08:30 - 09:00
    Introductory Remarks
    Cockatoo Room
  • 09:00 - 10:15
    Session I
    Cockatoo Room
    • In this presentation, I will speak about the disconnects I’ve encountered between cybersecurity researchers and practitioners. I will discuss how academics and practitioners share the common goal of cybersecurity but have fundamentally different approaches and incentives. Having completed a PhD in academia and then taken a research job in industry, I have at times found myself out of place in both worlds.

      I plan to present how academics tend to focus on theoretical ideas for new security technology, with the better projects grounding that research in an applied way, while practitioners, conversely, take an applied approach driven by the need to solve real-world problems and also attempt to derive new theories or perspectives from their work. However, learning from each other remains challenging. Academics contribute new knowledge through papers that can be inaccessible, whether because of the infrastructure required to find articles related to one's own work or because of the esoteric language in which academic literature is written. Security practitioners, on the other hand, focus on the structure and organization of running security operations; these business practices tend to be largely unknown at academic institutions. Graduate programs focusing on security may be aware of the technological problems facing cybersecurity platforms but unaware of how cybersecurity teams and divisions have been evolving. I hope to present these perspectives and offer some potential solutions, from newly created practitioner-focused journals to large public datasets built from real-world data.

      Speaker's Biography
      Chris Fennell is a Human-Computer Interaction (HCI) researcher working in cybersecurity whose research focuses on how individuals interact with security technologies. His interests have led him to publish research on a variety of cybersecurity topics, from understanding how individuals self-report security technology to designing a blockchain system for the poultry industry using participatory design workshops. He currently works with the threat hunting team on a myriad of projects, from participating in focused hunts to developing machine learning models for analysis. He is actively involved with the academic and practitioner research communities and has been privileged to work in academia, industry, and government. He holds a Bachelor of Science in Computer Science from Grace College and a PhD in Information from Michigan State University.

    • One of the hardest challenges for companies and their officers is determining how much to spend on cybersecurity and how to allocate those resources appropriately. Security “investments” are a cost on the ledger, and as such, companies do not want to spend more on security than they have to. The questions most boards have are “how much security is enough?” and “how good is our security program?” Most CISOs and SOC teams have a hard time answering these questions for lack of data and a framework to measure risk and compare with other similarly sized companies. This paper presents a data-driven, practical approach to assessing and scoring cybersecurity risk that can be used to allocate resources efficiently and mitigate cybersecurity risk in the areas that need it most. We combine both static and dynamic measures of risk to give a composite score more indicative of cybersecurity risk than static measures alone.
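
      The composite score itself is not specified in the abstract, so the following is only a minimal sketch of the general idea: normalize a few static and dynamic risk measures onto a common scale and combine them with weights. All measure names and weights below are illustrative assumptions, not the paper's model.

        # Minimal sketch: combine illustrative static and dynamic risk measures
        # into a single composite score. Measure names and weights are assumptions.

        STATIC_WEIGHT = 0.4    # e.g., asset inventory, unsupported software
        DYNAMIC_WEIGHT = 0.6   # e.g., observed exposures, patch latency

        def normalize(value, worst, best):
            """Map a raw measure onto [0, 1], where 1 is highest risk."""
            span = worst - best
            return max(0.0, min(1.0, (value - best) / span)) if span else 0.0

        def composite_risk(static_measures, dynamic_measures):
            """Weighted average of normalized static and dynamic measures."""
            static = sum(normalize(*m) for m in static_measures) / len(static_measures)
            dynamic = sum(normalize(*m) for m in dynamic_measures) / len(dynamic_measures)
            return STATIC_WEIGHT * static + DYNAMIC_WEIGHT * dynamic

        # Each measure is a (raw value, worst case, best case) triple.
        static = [(250, 1000, 0),   # unpatched hosts
                  (3, 10, 0)]       # unsupported OS versions in use
        dynamic = [(12, 50, 0),     # exposed services found this week
                   (30, 90, 0)]     # median days to patch critical findings
        print(f"Composite risk score: {composite_risk(static, dynamic):.2f}")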

    • Drew Walsh, Kevin Conklin (Deloitte)

      SOCs can be expensive, difficult to scale, and time-consuming for analysts. In this talk, we will outline the benefits of a cloud-hosted SOC that uses cloud-native tools and technologies. We will discuss Deloitte’s implementation of this design; the technological, economic, and analytic improvements it provides; and the proof points we experienced in our implementation of a cloud-hosted SOC.

      SOCs are typically designed to meet the near- to mid-term needs of an organization, and their data capacity can quickly be outpaced by the scale of monitoring sources and reporting needs. SOCs often do not scale gracefully when new data sources are added, when the organization grows, or when the SOC's requirements for reporting, mitigation, and response increase. Cloud-native tools and technologies within a cloud-hosted environment enable scalable SOC platforms that support threat hunting, incident response, reporting, and more without data storage limits, slow platform response times, or heavy manual effort. Our cloud-hosted SOC platform has shown significant improvements in platform operations and maintenance (O&M), with reduced costs for data storage and access as well as increased productivity of personnel on the platform through automation, data speeds, and cloud efficiencies. The cloud-hosted SOC architecture also provides several downstream advantages. Deloitte has demonstrated the ability to process data from multiple Zeek sensors at rates in excess of 10 Gbps with near real-time processing speeds and to store petabytes of data without compromising on ingested data sources. This control over data transfer, together with the benefit of processing data in the cloud, paves the way for additional edge analytic capabilities: teams can develop analytics that run at processing time to identify activity in near real time and/or filter unwanted data that would otherwise burden a datastore.
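
      As one illustration of the edge analytics mentioned above, the following is a minimal sketch of a filter that drops low-value Zeek connection records before they reach the datastore. The field names, drop criteria, and JSON input format are assumptions made for illustration, not Deloitte's pipeline.

        # Minimal sketch of an edge analytic that filters Zeek conn.log records
        # (in JSON form) before forwarding them to the SOC datastore.
        import json

        NOISY_SERVICES = {"dns", "ntp"}   # assumed high-volume, low-value traffic
        INTERNAL_PREFIX = "10."           # assumed internal address prefix

        def keep(record: dict) -> bool:
            """Drop internal-to-internal traffic for noisy services; keep the rest."""
            internal = (record.get("id.orig_h", "").startswith(INTERNAL_PREFIX)
                        and record.get("id.resp_h", "").startswith(INTERNAL_PREFIX))
            return not (internal and record.get("service") in NOISY_SERVICES)

        def filter_stream(lines):
            """Yield only the records worth storing; meant to run near the sensor."""
            for line in lines:
                record = json.loads(line)
                if keep(record):
                    yield record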

      Speakers' Biographies
      Drew Walsh is an Advisory Manager in Deloitte’s Government and Public Services practice. He has contributed to and leads the research and development of big data cloud architectures and analytics applied to cyber monitoring and anomaly detection. He holds a B.S. in Computer Science from West Chester University, an M.S. in Information Security Policy and Management from Carnegie Mellon University, and the CISSP certification.

      Kevin Conklin is a Systems Architect in Deloitte’s Government and Public Services practice. He contributes to and leads big data cloud pipeline engineering, data visualization, database migration, and AI/ML development in both AWS and GCP. He holds both a B.S. in Mathematics and an M.S. in Business Analytics from Arizona State University.

    • Nidhi Rastogi, Md Tanvirul Alam (Rochester Institute of Technology)

      Cyber threat intelligence (CTI) has been valuable to SOC analysts investigating emerging and known threats and attacks. However, its reach is still limited, and adoption could be higher. While CTI has consistently proven to be a rich source of threat indicators and patterns collected by peer security researchers, other researchers have only occasionally found it helpful. One challenge is that the intelligence in CTI is documented in an unstructured format and embedded in large amounts of text, making it difficult to integrate effectively with existing threat intelligence analysis tools for internal system logs. In this paper, we detail ongoing research in threat intelligence extraction, integration, and analysis at different levels of granularity from unstructured threat analysis reports. We share ongoing challenges and provide recommendations to overcome them.
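
      As a minimal illustration of the lowest level of granularity involved, the sketch below pulls atomic indicators (IP addresses, file hashes, CVE identifiers) out of unstructured report text with regular expressions. The patterns are illustrative assumptions; the extraction pipeline described in the paper is not limited to regex matching.

        # Minimal sketch: extract atomic indicators from an unstructured report.
        import re

        PATTERNS = {
            "ipv4": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
            "sha256": r"\b[a-fA-F0-9]{64}\b",
            "cve": r"\bCVE-\d{4}-\d{4,7}\b",
        }

        def extract_indicators(report_text: str) -> dict:
            """Return a mapping of indicator type to sorted unique matches."""
            return {name: sorted(set(re.findall(pattern, report_text)))
                    for name, pattern in PATTERNS.items()}

        sample = "The actor exploited CVE-2021-44228 and beaconed to 203.0.113.7."
        print(extract_indicators(sample))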

  • 10:15 - 10:45
    Morning Coffee Break
    Boardroom with Foyer
  • 10:45 - 11:45
    Session II
    Cockatoo Room
    • Internet exposures are often created unintentionally, and they leave organizations vulnerable to a variety of cyberattacks. In recent years, there has been an unprecedented increase in adversaries' use of automation for reconnaissance and exploitation. While sophisticated attackers continue using automation to scan the internet for vulnerabilities and actively exploit them, why not use it not only to monitor your organization’s attack surface but also to actively remediate publicly exposed assets and cloud misconfigurations? One of the biggest offenders (growing with the demands of telework and cloud computing) is the Remote Desktop Protocol (RDP), which has been determined to be the most utilized initial attack vector for ransomware gangs. With the average cost of a successful ransomware attack totaling over $300k, even a small misconfiguration becomes something every enterprise wants to avoid and mitigate as soon as possible. Defensive automation combined with active remediation is a necessary first step for organizations to prevent such inevitable configuration slips from becoming hundreds of thousands of dollars of damage and headline news.

      Talk outline
      External Attack Surface Management (EASM) is the process of continuously identifying, monitoring, and managing all internet-connected assets for potential attack vectors, exposures, and risks. However, an ASM solution and an attack surface management plan are only part of the equation, because once exposures have been identified, remediation needs to be prompt and swift. Every second that a critical exposure, such as RDP open to the internet, remains unaddressed is another opportunity for it to be used as a ransomware attack vector that can cost your organization hundreds of thousands of dollars. Therefore, automation that can collect more information on a vulnerability, notify the right asset owners, and implement remediation as fast as possible should be available to a SOC for easy deployment.

      Automated incident response is complicated to create, implement, and execute. It requires several tasks, including collecting information about an asset, determining the likely service owner, sending a notification to that owner, and creating a runbook. Building such automation is challenging because product APIs change, credentials need to be securely stored and shared, and true alert triggers must be generated with minimal latency. In this talk, I will present an automation solution that overcomes these challenges and helps an organization remediate the unexpected exposure of assets (e.g., RDP) to the internet.
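
      The sketch below outlines the shape of such a workflow: detect an unexpected RDP exposure, enrich it with asset information, and notify the likely owner. Every helper here is a hypothetical placeholder, not a Palo Alto Networks product API; a real deployment would wire these steps to the relevant inventory, notification, and ticketing systems.

        # Minimal sketch of the remediation workflow described above.
        import socket

        RDP_PORT = 3389

        def is_rdp_exposed(host: str, timeout: float = 2.0) -> bool:
            """Check whether TCP/3389 answers from the scanning vantage point."""
            try:
                with socket.create_connection((host, RDP_PORT), timeout=timeout):
                    return True
            except OSError:
                return False

        def lookup_asset_inventory(host: str) -> dict:
            """Hypothetical placeholder for a CMDB or asset-inventory lookup."""
            return {"host": host, "owner_email": "unknown@example.com"}

        def notify(owner_email: str, message: str) -> None:
            """Hypothetical placeholder for a paging or e-mail integration."""
            print(f"notify {owner_email}: {message}")

        def remediate_exposure(host: str) -> None:
            """Detect, enrich, and escalate an unexpected RDP exposure."""
            if not is_rdp_exposed(host):
                return
            asset = lookup_asset_inventory(host)
            notify(asset["owner_email"], f"RDP (TCP/{RDP_PORT}) exposed on {host}")
            # A real workflow would also open a ticket and attach a runbook here.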

      Speaker's Biography

      • Johnathan Wilkes is a Security Architect with Palo Alto Networks
      • He has worked at Palo Alto Networks for over 2 years
      • Before automating Attack Surface Management remediation, he helped a state government automate its security operations center
      • He has been helping enterprise and government customers with security and network automation for over 8 years
    • Jakob Nyberg, Pontus Johnson (KTH Royal Institute of Technology)

      We implemented and evaluated an automated cyber defense agent. The agent takes security alerts as input and uses reinforcement learning to learn a policy for executing predefined defensive measures. The defender policies were trained in an environment intended to simulate a cyber attack. In the simulation, an attacking agent attempts to capture targets in the environment, while the defender attempts to protect them by enabling defenses. The environment was modeled using attack graphs based on the Meta Attack Language. We assumed that defensive measures have downtime costs, meaning that the defender agent was penalized for using them. We also assumed that the environment was equipped with an imperfect intrusion detection system that occasionally produces erroneous alerts based on the environment state. To evaluate the setup, we trained the defensive agent with different volumes of intrusion detection system noise. We also trained agents with different attacker strategies and graph sizes. In experiments, the defensive agent using policies trained with reinforcement learning outperformed agents using heuristic policies. Experiments also demonstrated that the policies could generalize across different attacker strategies. However, the performance of the learned policies decreased as the attack graphs increased in size.
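
      The simulator and agents are not reproduced here, but the toy sketch below illustrates the reward structure described above (a downtime cost for enabled defenses plus a penalty for captured targets) and a simple heuristic baseline policy of the kind the learned policies were compared against. The environment, costs, and noise model are illustrative assumptions, not the Meta Attack Language attack-graph simulator used in the paper; a reinforcement learning agent would replace the heuristic with a policy learned from episode rewards.

        # Toy sketch of the defender's reward: penalties for captured targets
        # plus downtime costs for every defense that is switched on.
        import random

        N_STEPS, N_DEFENSES = 20, 3
        DOWNTIME_COST, CAPTURE_COST = 1.0, 10.0
        ALERT_NOISE = 0.1   # probability an alert is flipped (false positive/negative)

        def noisy_alerts(attacked):
            """Imperfect IDS: each alert is wrong with probability ALERT_NOISE."""
            return tuple(a if random.random() > ALERT_NOISE else 1 - a for a in attacked)

        def episode(policy):
            """Run one episode and return the defender's total reward."""
            enabled, total = [0] * N_DEFENSES, 0.0
            for _ in range(N_STEPS):
                attacked = [random.randint(0, 1) for _ in range(N_DEFENSES)]
                action = policy(noisy_alerts(attacked))   # defense index to enable, or None
                if action is not None:
                    enabled[action] = 1
                captures = sum(a and not d for a, d in zip(attacked, enabled))
                total -= CAPTURE_COST * captures + DOWNTIME_COST * sum(enabled)
            return total

        def alert_following_policy(alerts):
            """Heuristic baseline: enable the first defense whose alert fired."""
            return next((i for i, a in enumerate(alerts) if a), None)

        print(episode(alert_following_policy))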

    • Threat hunting is the cybersecurity practice of proactively searching for malicious activity within an environment. With the arrival of newer technologies and techniques such as machine learning (ML), cybersecurity teams can effectively examine broad swaths of data using metrics computed over particular datasets. This paper explores the utility of having multiple ML scores generated by separate models against a sanitized subset of data. Dashboards of the scores provide different perspectives on the same dataset: a low score in one model may very well be a high score in another. This allows threat hunters to approach the data from different perspectives and raises awareness of unique data points that might otherwise have been ignored. Our findings indicate that the greatest utility this approach offers for threat hunting is not in its summative approach of scoring all the data but in its discriminant ability to compare the different models' scores.
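
      The sketch below shows the general pattern of scoring one dataset with two independent anomaly models and surfacing the rows where their rankings disagree most, which is where the discriminant comparison described above becomes useful. The feature matrix and model choices are illustrative assumptions, not the models used in the paper.

        # Minimal sketch: score the same data with two anomaly models and
        # rank rows by how strongly the models disagree.
        import numpy as np
        from sklearn.ensemble import IsolationForest
        from sklearn.neighbors import LocalOutlierFactor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 4))               # stand-in feature matrix

        iso = IsolationForest(random_state=0).fit(X)
        iso_scores = -iso.score_samples(X)          # higher = more anomalous

        lof = LocalOutlierFactor().fit(X)
        lof_scores = -lof.negative_outlier_factor_  # higher = more anomalous

        def rank(scores):
            """Convert raw scores to percentile ranks so the models are comparable."""
            return scores.argsort().argsort() / (len(scores) - 1)

        disagreement = np.abs(rank(iso_scores) - rank(lof_scores))
        print("Rows the models disagree on most:", disagreement.argsort()[-5:][::-1])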

      Speaker's Biography
      Adam Hoffman is a Technical Expert on the UEBA Cybersecurity team with over 12 combined years at Walmart. He has extensive experience in various facets of data analysis, including database management, data visualization using various tools and languages, data engineering, and practical machine learning solutions. Adam is known for the self-discipline to learn continuously and a passion for applying data science methodologies within the Security Operations Center and Incident Response domains. He has made a considerable impact that has enabled faster and more agile responses to threats. Adam has received formal recognition at Walmart for his accomplishments, including the Making a Difference Award and the Star Award. He holds a Bachelor of Science degree in Marketing Management from the University of Arkansas.

  • 11:45 - 12:30
    Conclusion and Wrap Up
    Cockatoo Room