Humans and Automation: Augmenting Security Operation Centers
Abstract
1. Introduction
2. Materials and Methods
3. Results and Findings
3.1. SOC Challenges Necessitating Automation
3.1.1. Lack of Cyber Threat Intelligence
3.1.2. Ineffective Identification of Threats
3.1.3. Alert Fatigue
3.1.4. Alert Reporting Constraints
3.2. SOC Automation Application Areas
3.2.1. Automated Cyber Threat Intelligence (CTI) Information
- Findings: Overall, we classify automated CTI tools as operating in four sequential phases (a pipeline sketch follows this list):
- Collect threat information.
- Analyze and convert threat information into threat intelligence.
- Validate the cyber threat intelligence against internal alert information to determine the presence or absence of intrusions.
- Generate indicators of compromise (IOCs).
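To make the pipeline concrete, below is a minimal Python sketch of the four phases chained together. The feed names, alert fields, and confidence score are illustrative assumptions and are not drawn from any of the reviewed tools.

```python
# Minimal sketch of the four CTI phases above (hypothetical feeds, field
# names, and matching rule; not drawn from any specific reviewed tool).
from dataclasses import dataclass


@dataclass
class Indicator:
    value: str         # e.g., an IP address or file hash
    source: str        # feed the raw observation came from
    confidence: float  # tool- or analyst-assigned confidence (0..1)


def collect(feeds: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Phase 1: gather raw threat observations from external feeds."""
    return [(obs, feed) for feed, observations in feeds.items() for obs in observations]


def analyze(raw: list[tuple[str, str]]) -> list[Indicator]:
    """Phase 2: convert raw observations into scored threat intelligence."""
    return [Indicator(value=obs, source=feed, confidence=0.7) for obs, feed in raw]


def validate(intel: list[Indicator], internal_alerts: list[dict]) -> list[Indicator]:
    """Phase 3: check intelligence against internal alerts to confirm intrusions."""
    seen = {alert["src_ip"] for alert in internal_alerts}
    return [ind for ind in intel if ind.value in seen]


def generate_iocs(confirmed: list[Indicator]) -> list[dict]:
    """Phase 4: emit indicators of compromise for downstream SOC tooling."""
    return [{"type": "ipv4", "value": ind.value, "source": ind.source} for ind in confirmed]


if __name__ == "__main__":
    feeds = {"osint_feed": ["203.0.113.7", "198.51.100.9"]}
    alerts = [{"src_ip": "203.0.113.7", "rule": "outbound-beacon"}]
    iocs = generate_iocs(validate(analyze(collect(feeds)), alerts))
    print(iocs)  # [{'type': 'ipv4', 'value': '203.0.113.7', 'source': 'osint_feed'}]
```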
3.2.2. Automated Incident Detection
- Findings: Across the examined articles, automation was commonly applied for three primary purposes in the detection phase (the third is illustrated in the sketch after this list):
- Connecting and correlating alerts from different security tools integrated into the SOC.
- Presenting suspicious alerts alongside their surrounding (and related) alerts to provide richer context.
- Grouping alerts based on similar characteristics and IOCs (often called clustering). For instance, ref. [55] advised that automation should analyze flagged alerts in context with similar previous incidents to allow analysts to determine how to remediate them.
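As an illustration of the third purpose, the sketch below groups alerts that share IOCs using a simple union-find clustering approach; the alert records and fields are hypothetical and do not reflect any specific tool from the literature.

```python
# Hypothetical sketch: group alerts that share at least one IOC (IP, hash, etc.)
# into clusters so analysts can review related alerts and prior incidents together.
from collections import defaultdict

alerts = [  # illustrative alert records only
    {"id": "A1", "iocs": {"203.0.113.7", "mal.exe"}},
    {"id": "A2", "iocs": {"mal.exe"}},
    {"id": "A3", "iocs": {"198.51.100.9"}},
]

# Union-find over alerts connected by shared IOCs.
parent = {a["id"]: a["id"] for a in alerts}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

ioc_to_alert = {}
for a in alerts:
    for ioc in a["iocs"]:
        if ioc in ioc_to_alert:
            union(a["id"], ioc_to_alert[ioc])  # shared IOC -> same cluster
        else:
            ioc_to_alert[ioc] = a["id"]

clusters = defaultdict(list)
for a in alerts:
    clusters[find(a["id"])].append(a["id"])

print(list(clusters.values()))  # [['A1', 'A2'], ['A3']]
```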
3.2.3. Automated Alert Context
- Findings: Within this theme, our analysis led us to interpret that alert context is first extracted and then generated:
- Context extraction refers to collating organizational information affected by security alerts (i.e., pulling context from additional sources beyond the SOC). The more information that is collected from around the organization, such as business assets affected and processes disrupted, the better decision-making will be [58,59].
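A minimal sketch of context extraction is shown below, assuming a hypothetical asset inventory keyed by host name; the inventory fields and lookups are illustrative rather than a particular product's schema.

```python
# Hypothetical sketch of context extraction: enrich a raw alert with business
# context (asset owner, criticality, affected process) pulled from an
# illustrative asset inventory rather than from the SOC toolchain itself.
ASSET_INVENTORY = {
    "fin-db-01": {"owner": "Finance", "criticality": "high",
                  "business_process": "payroll"},
    "dev-ws-17": {"owner": "Engineering", "criticality": "low",
                  "business_process": "development"},
}

def extract_context(alert: dict) -> dict:
    """Attach organizational context to an alert so decisions reflect business impact."""
    asset = ASSET_INVENTORY.get(alert["host"], {})
    return {
        **alert,
        "asset_owner": asset.get("owner", "unknown"),
        "criticality": asset.get("criticality", "unknown"),
        "business_process": asset.get("business_process", "unknown"),
    }

alert = {"id": "A42", "host": "fin-db-01", "signature": "suspicious-login"}
print(extract_context(alert)["criticality"])  # high
```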
3.2.4. Automated Alert Reports (Explainable and Interpretable)
- Interpretability refers to automation’s ability to produce information that security analysts can comprehend. In SOC automation, interpretability operates at the alert level, referring to the security event flagged, and can be measured by the analyst’s ability to piece the intrusion storyline together.
- Explainability occurs at the system level, whereby automated tools provide analysts with an explanation and justification of how they operate and why they have arrived at a particular decision. Ref. [37] (p. 112394) define explainability as “following the reasoning behind all the possible outcomes [whereby] models shed light on the model’s decision-making process.” Hence, explainable automation makes significant efforts to keep the human-in-the-loop. While these two constructs can exist independently, we recommend that they be coupled together, as seen at the top of Figure 4.
- Sufficient interpretability and explanation lead to increased understanding. However, how understandable this information is depends on the audience and their skill level. Ref. [51] state that information presented to different stakeholders must be tailored to their expertise and terminology.
- If users at a given level of expertise cannot comprehend the information already presented, post hoc explainability in the form of data visualizations can be provided [37], although this may come with performance trade-offs (a toy example of post hoc explanation follows).
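The toy example below shows one common form of post hoc explanation, permutation feature importance, applied to a synthetic alert classifier. The feature names and data are invented for illustration and are not drawn from the reviewed systems.

```python
# Illustrative post hoc explanation of a toy alert classifier: permutation
# importance shows which features drive its decisions. Features and data are
# synthetic; this is not how any reviewed SOC tool is implemented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["bytes_out", "failed_logins", "off_hours", "rare_domain"]
X = rng.random((200, 4))
y = (0.7 * X[:, 1] + 0.3 * X[:, 3] > 0.5).astype(int)  # synthetic ground truth

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Present a ranked, analyst-readable summary of what the model relies on.
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:>14}: {result.importances_mean[idx]:.3f}")
```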
3.2.5. Automated Incident Response
- Findings: AI and ML solutions in the incident response phase must be accompanied by contextual information and data visualizations to aid analysts’ understanding of how these systems arrive at solutions [58]. Furthermore, when these systems are utilized, it is imperative to know whether they are better deployed as recommendation engines or as implementation agents. Also relevant to AI and ML models is whether they employ supervised or unsupervised learning, a topic that sparked much debate within the literature reviewed [36,37,38,64]. We conclude that the critical nature of missed alerts demands highly trained supervised learning models to ensure greater accuracy in detecting threats. Nevertheless, given the unpredictable and complex nature of cybersecurity, security analysts can also benefit from unsupervised models that detect abnormalities and patterns beyond traditional tool recognition. Hence, we suggest adopting a semi-supervised approach (a minimal sketch follows this paragraph). The varying levels of autonomy illustrate that AI and ML solutions in incident response can be applied to differing degrees, and we recommend that this be decided on a case-by-case basis. To better understand when to harness automation in the response phase, and at what level, we recommend applying the four critical success factors put forward by [65]: (1) task-based automation, (2) process-based automation, (3) automation performance appraisal, and (4) SOC analyst training of automation systems.
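As a minimal illustration of the semi-supervised suggestion, the sketch below uses scikit-learn's SelfTrainingClassifier to propagate a small set of analyst-assigned labels to a larger pool of unlabeled alerts; the data are synthetic and the estimator choice is only an example, not a recommendation from the reviewed literature.

```python
# Sketch of a semi-supervised approach: a handful of analyst-labeled alerts
# (1 = malicious, 0 = benign) plus many unlabeled ones (label -1).
# Data are synthetic; the base estimator is illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(1)
X = rng.random((300, 4))                       # toy alert feature vectors
y_true = (X[:, 0] + X[:, 2] > 1.0).astype(int)

y = np.full(300, -1)                           # -1 marks unlabeled alerts
labeled_idx = rng.choice(300, size=30, replace=False)
y[labeled_idx] = y_true[labeled_idx]           # only 10% analyst-labeled

model = SelfTrainingClassifier(LogisticRegression(), threshold=0.8)
model.fit(X, y)                                # propagates confident labels

print("accuracy on all alerts:", (model.predict(X) == y_true).mean())
```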
3.3. SOC Automation Implications on Analysts
- Findings: To avoid bias and complacency, trust in automation must be calibrated so that well-performing tools are utilized accordingly and inconsistent tools are treated apprehensively. Factoring in the findings from previous sections shows that providing clear interpretations of events to analysts helps prevent situations in which objectively well-performing tools are underutilized (i.e., it prevents automation disuse). Additionally, exposure to bias and complacency can be reduced by applying higher levels of automation to well-defined processes and against well-modeled, less severe threats. In contrast, ambiguous processes must be overseen and managed by security analysts [12]. The automatic execution of mitigation and restoration strategies is increasingly recognized in the literature. For instance, ref. [13] introduced an Action Recommendation Engine (ARE) that continually monitors network systems, offering tailored recommendations for actions to respond to identified malicious traffic. Based on raw data, alert reports, system feedback from previously recommended actions, and the severity of the threat, the engine either provides an action recommendation or directly applies the action itself. When following the latter approach, the authors acknowledge that this leads to the “human-out-of-the-loop”. The ARE illustrates the range of levels at which automation can be applied (a gating sketch follows this paragraph).
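The sketch below illustrates how such a recommend-versus-apply decision could be gated on severity, process maturity, and model confidence. The thresholds, field names, and the notion of a "well-defined" process are assumptions for illustration and are not taken from the ARE in [13].

```python
# Hypothetical gating logic for recommend-vs-apply, in the spirit of the
# levels-of-automation discussion above. Thresholds and field names are
# illustrative assumptions, not taken from [13].
def handle_threat(threat: dict) -> str:
    well_defined = threat["process_maturity"] == "well-defined"
    low_severity = threat["severity"] <= 3           # e.g., 1 (low) .. 10 (critical)
    confident = threat["model_confidence"] >= 0.9

    if well_defined and low_severity and confident:
        return f"AUTO-APPLY: {threat['recommended_action']} (human-on-the-loop review)"
    return f"RECOMMEND: {threat['recommended_action']} (await analyst approval)"


print(handle_threat({"severity": 2, "process_maturity": "well-defined",
                     "model_confidence": 0.95,
                     "recommended_action": "block source IP"}))
print(handle_threat({"severity": 8, "process_maturity": "ambiguous",
                     "model_confidence": 0.97,
                     "recommended_action": "isolate host"}))
```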
3.4. SOC Automation Human Factor Sentiment
- Findings: To achieve a mutually beneficial relationship between security analysts and automation, involving the analysts during the development and integration of new tools is critical. Automation will only be as skilled as the analysts who program it. Therefore, recognizing the domain knowledge that analysts possess, specifically that of Tier 3 analysts, will result in more intelligent technical solutions being identified. It is important to note that an organization will derive little benefit from an environment that is purely analyst-based or purely automation-based. Instead, the two entities must complement each other.
4. Discussion
- Degree of Automation: The number of processes that a SOC automates. For example, SOCs with high degrees of automation will have automated processes throughout the incident response lifecycle. SOCs with low degrees of automation may only automate the detection of alerts. This can also be described as the breadth of automation.
- Level of Automation: The level of autonomy that automated SOC processes possess. For example, high levels of automation in the response phase will recommend and implement response actions, with the analyst’s option to intercede if necessary. Conversely, lower levels of automation in the response process may only provide analysts with several alternative response strategies. This can also be described as the depth of automation.
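The breadth/depth distinction can be made concrete with a small scoring sketch over a hypothetical SOC profile. The lifecycle phase names and the 0–4 autonomy scale are illustrative assumptions, not a measurement instrument proposed in this paper.

```python
# Illustrative scoring of the two dimensions defined above: degree (breadth,
# how many lifecycle phases are automated) and level (depth, how autonomously
# each automated phase acts). Phase names and the 0-4 scale are assumptions.
LIFECYCLE_PHASES = ["threat_intel", "detection", "context", "reporting", "response"]

# Per-phase level of automation for a hypothetical SOC:
# 0 = manual, 1 = informs, 2 = recommends, 3 = acts with veto, 4 = acts autonomously
soc_profile = {"threat_intel": 3, "detection": 4, "context": 2, "reporting": 0, "response": 1}

degree = sum(1 for p in LIFECYCLE_PHASES if soc_profile.get(p, 0) > 0) / len(LIFECYCLE_PHASES)
automated = [soc_profile[p] for p in LIFECYCLE_PHASES if soc_profile.get(p, 0) > 0]
level = sum(automated) / (4 * len(automated)) if automated else 0.0

print(f"degree (breadth): {degree:.2f}")  # 0.80 -> four of five phases automated
print(f"level  (depth):   {level:.2f}")   # 0.62 -> moderate autonomy within them
```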
4.1. The SOC Automation Matrix
- Quadrant 1—Low Automation + High Human
- Quadrant 2—High Automation + High Human
- Quadrant 3—Low Automation + Low Human
- Quadrant 4—High Automation + Low Human
4.2. The SOC Automation Matrix—Considering Levels of Automation
4.3. Limitations and Future Research
5. Conclusions and Contribution
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Search Strings by Select Database
Database | Search String | Results |
---|---|---|
ScienceDirect (199) | Secondary Objective #1: Title, abstract, or author-specified keywords (“Security Operations Center” OR “Network Operation Center” OR “Cybersecurity Operation” OR “Cyber Security Operation” OR “Incident Response”) AND (“Automate” OR “Automation” OR “Decision Support” OR “Decision Aid”) | 39 |
 | Secondary Objective #2: Title, abstract, or author-specified keywords (“Security Operations Center” OR “Network Operations Center” OR “Cybersecurity” OR “Cyber Security”) AND (“SOC Analyst” OR “Analyst” OR “Security Analyst” OR “Human”) AND Find articles with these terms (“Automate” OR “Automation” OR “Decision Support” OR “Decision Aid” OR “Technical Control”) AND (“Complacency” OR “Bias” OR “Trust”) | 160 |
Total after removing duplicates: 193 | | |
Appendix B. SOC Challenges Thematic Maps
Appendix C. Automation Application Areas Thematic Map
Appendix D. SOC Automation Restrictive Characteristics
Characteristic | Details | Results Informing Characteristic |
---|---|---|
Black-box automation | Automation lacks transparency in how it operates. | |
Automation competes on low false-negative rates | The premise is not to let any alert slip through. This comes at the cost of not fine-tuning sensors, resulting in high alert volumes. | |
Tool overload | SOCs are inundated with new automated tools instead of optimizing the use of select tools. | |
Unsupervised machine learning | Tools are not trained based on the expertise of SOC analyst insights but learn independently. | |
Appendix E. SOC Automation Scenarios
SOC | Automated Processes and Levels of Automation |
---|---|
SOC 2A | |
SOC 2B | |
References
- Chiba, D.; Akiyama, M.; Otsuki, Y.; Hada, H.; Yagi, T.; Fiebig, T.; Van Eeten, M. DomainPrio: Prioritizing Domain Name Investigations to Improve SOC Efficiency. IEEE Access 2022, 10, 34352–34368. [Google Scholar] [CrossRef]
- Husák, M.; Čermák, M. SoK: Applications and Challenges of Using Recommender Systems in Cybersecurity Incident Handling and Response. In ARES ’22: Proceedings of the 17th International Conference on Availability, Reliability and Security, Vienna Austria, 23–26 August 2022; Association for Computing Machinery: New York, NY, USA, 2022. [Google Scholar] [CrossRef]
- Coro Cybersecurity. 2024 SME Security Workload Impact Report; Coro Cybersecurity. 2023; pp. 1–11. Available online: https://www.coro.net/sme-security-workload-impact-report (accessed on 12 January 2024).
- Singh, I.L.; Molloy, R.; Parasuraman, R. Individual Differences in Monitoring Failures of Automation. J. Gen. Psychol. 1993, 120, 357–373. [Google Scholar] [CrossRef]
- Skitka, L.J.; Mosier, K.L.; Burdick, M. Does Automation Bias Decision-Making? Int. J. Hum.-Comput. Stud. 1999, 51, 991–1006. [Google Scholar] [CrossRef]
- Brown, P.; Christensen, K.; Schuster, D. An Investigation of Trust in a Cyber Security Tool. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2016, 60, 1454–1458. [Google Scholar] [CrossRef]
- Butavicius, M.; Parsons, K.; Lillie, M.; McCormac, A.; Pattinson, M.; Calic, D. When believing in technology leads to poor cyber security: Development of a trust in technical controls scale. Comput. Secur. 2020, 98, 102020. [Google Scholar] [CrossRef]
- Cain, A.A. Trust and Complacency in Cyber Security. Master’s Thesis, San Jose State University, San Jose, CA, USA, 2016. [Google Scholar] [CrossRef]
- Chen, J.; Mishler, S.; Hu, B. Automation Error Type and Methods of Communicating Automation Reliability Affect Trust and Performance: An Empirical Study in the Cyber Domain. IEEE Trans. Human-Mach. Syst. 2021, 51, 463–473. [Google Scholar] [CrossRef]
- Lyn Paul, C.; Blaha, L.M.; Fallon, C.K.; Gonzalez, C.; Gutzwiller, R.S. Opportunities and Challenges for Human-Machine Teaming in Cybersecurity Operations. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2019, 63, 442–446. [Google Scholar] [CrossRef]
- Ryan, T.J.; Alarcon, G.M.; Walter, C.; Gamble, R.; Jessup, S.A.; Capiola, A.; Pfahler, M.D. Trust in Automated Software Repair: The Effects of Repair Source, Transparency, and Programmer Experience on Perceived Trustworthiness and Trust. In Proceedings of the HCI for Cybersecurity, Privacy and Trust, Orlando, FL, USA, 26–31 July 2019; Moallem, A., Ed.; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; Volume 11594, pp. 452–470. [Google Scholar] [CrossRef]
- Bridges, R.A.; Rice, A.E.; Oesch, S.; Nichols, J.A.; Watson, C.; Spakes, K.; Norem, S.; Huettel, M.; Jewell, B.; Weber, B.; et al. Testing SOAR Tools in Use. Comput. Secur. 2023, 129, 103201. [Google Scholar] [CrossRef]
- Altamimi, S.; Altamimi, B.; Côté, D.; Shirmohammadi, S. Toward a Superintelligent Action Recommender for Network Operation Centers Using Reinforcement Learning. IEEE Access 2023, 11, 20216–20229. [Google Scholar] [CrossRef]
- Crowley, C.; Filkins, B.; Pescatore, J. SANS 2023 SOC Survey; SANS Analyst Program; SANS Institute: Rockville, MD, USA, 2023; pp. 1–24. Available online: https://www.sans.org/white-papers/2023-sans-soc-survey/ (accessed on 8 February 2024).
- Agyepong, E.; Cherdantseva, Y.; Reinecke, P.; Burnap, P. A Systematic Method for Measuring the Performance of a Cyber Security Operations Centre Analyst. Comput. Secur. 2023, 124, 102959. [Google Scholar] [CrossRef]
- Shahjee, D.; Ware, N. Integrated Network and Security Operation Center: A Systematic Analysis. IEEE Access 2022, 10. [Google Scholar] [CrossRef]
- Sheridan, T.B.; Parasuraman, R. Human-Automation Interaction. Rev. Hum. Factors Ergon. 2005, 1, 89–129. [Google Scholar] [CrossRef]
- Hauptman, A.I.; Schelble, B.G.; McNeese, N.J.; Madathil, K.C. Adapt and Overcome: Perceptions of Adaptive Autonomous Agents for Human-AI Teaming. Comput. Hum. Behav. 2023, 138. [Google Scholar] [CrossRef]
- Miller, C.A.; Parasuraman, R. Beyond Levels of Automation: An Architecture for More Flexible Human-Automation Collaboration. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2003, 47, 182–186. [Google Scholar] [CrossRef]
- Applebaum, A.; Johnson, S.; Limiero, S.; Smith, M. Playbook Oriented Cyber Response. In Proceedings of the 2018 National Cyber Summit (NCS), Huntsville, AL, USA, 5 June 2018; pp. 8–15. [Google Scholar] [CrossRef]
- Peters, M.D.J.; Marnie, C.; Tricco, A.C.; Pollock, D.; Munn, Z.; Alexander, L.; McInerney, P.; Godfrey, C.M.; Khalil, H. Updated Methodological Guidance for the Conduct of Scoping Reviews. JBI Evid. Synth. 2020, 18, 2119–2126. [Google Scholar] [CrossRef] [PubMed]
- Braun, V.; Clarke, V. Using Thematic Analysis in Psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
- Chamkar, S.A.; Maleh, Y.; Gherabi, N. The Human Factor Capabilities in Security Operation Centers (SOC). EDPACS 2022, 66, 1–14. [Google Scholar] [CrossRef]
- Kokulu, F.B.; Soneji, A.; Bao, T.; Shoshitaishvili, Y.; Zhao, Z.; Doupé, A.; Ahn, G.-J. Matched and Mismatched SOCs: A Qualitative Study on Security Operations Center Issues. In CCS ’19; Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, London, UK, 11–15 November 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1955–1970. [Google Scholar] [CrossRef]
- Vielberth, M.; Bohm, F.; Fichtinger, I.; Pernul, G. Security Operations Center: A Systematic Study and Open Challenges. IEEE Access 2020, 8, 227756–227779. [Google Scholar] [CrossRef]
- Amthor, P.; Fischer, D.; Kühnhauser, W.E.; Stelzer, D. Automated Cyber Threat Sensing and Responding: Integrating Threat Intelligence into Security-Policy-Controlled Systems. In ARES ’19, Proceedings of the 14th International Conference on Availability, Reliability and Security, Canterbury, UK, 26–29 August 2019; Association for Computing Machinery: New York, NY, USA, 2019. [Google Scholar] [CrossRef]
- Yang, W.; Lam, K.-Y. Automated Cyber Threat Intelligence Reports Classification for Early Warning of Cyber Attacks in Next Generation SOC. In Information and Communications Security; ICICS 2019. Lecture Notes in Computer Science; Zhou, J., Luo, X., Shen, Q., Xu, Z., Eds.; Springer: Berlin/Heidelberg, Germany, 2020; Volume 11999, pp. 145–164. [Google Scholar] [CrossRef]
- Baroni, P.; Cerutti, F.; Fogli, D.; Giacomin, M.; Gringoli, F.; Guida, G.; Sullivan, P. Self-Aware Effective Identification and Response to Viral Cyber Threats. In Proceedings of the 13th International Conference on Cyber Conflict (CyCon), Tallinn, Estonia, 25–28 May 2021; Jancarkova, T., Lindstrom, L., Visky, G., Zotz, P., Eds.; NATO CCD COE Publications. CCD COE: Tallinn, Estonia, 2021; Volume 2021, pp. 353–370. [Google Scholar] [CrossRef]
- Strickson, B.; Worsley, C.; Bertram, S. Human-Centered Assessment of Automated Tools for Improved Cyber Situational Awareness. In Proceedings of the 2023 15th International Conference on Cyber Conflict: Meeting Reality (CyCon), Tallinn, Estonia, 30 May–2 June 2023; pp. 273–286. [Google Scholar] [CrossRef]
- Islam, C.; Babar, M.A.; Croft, R.; Janicke, H. SmartValidator: A Framework for Automatic Identification and Classification of Cyber Threat Data. J. Network. Comput. Appl. 2022, 202, 103370. [Google Scholar] [CrossRef]
- Basyurt, A.S.; Fromm, J.; Kuehn, P.; Kaufhold, M.-A.; Mirbabaie, M. Help Wanted-Challenges in Data Collection, Analysis and Communication of Cyber Threats in Security Operation Centers. In Proceedings of the 17th International Conference on Wirtschaftsinformatik, Online. 21–23 February 2022; Available online: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85171997510&partnerID=40&md5=30a02b455898c7c2c9d2421d82606470 (accessed on 15 September 2023).
- Hughes, K.; McLaughlin, K.; Sezer, S. Dynamic Countermeasure Knowledge for Intrusion Response Systems. In Proceedings of the Irish Signals and Systems Conference (ISSC), Letterkenny, Ireland, 11–12 June 2020. [Google Scholar] [CrossRef]
- Gupta, N.; Traore, I.; de Quinan, P.M.F. Automated Event Prioritization for Security Operation Center Using Deep Learning. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 5864–5872. [Google Scholar] [CrossRef]
- Renners, L.; Heine, F.; Kleiner, C.; Rodosek, G.D. Adaptive and Intelligible Prioritization for Network Security Incidents. In Proceedings of the 2019 International Conference on Cyber Security and Protection of Digital Services (Cyber Security), Oxford, UK, 3–4 June 2019; pp. 1–8. [Google Scholar] [CrossRef]
- Van der Kleij, R.; Schraagen, J.M.; Cadet, B.; Young, H. Developing Decision Support for Cybersecurity Threat and Incident Managers. Comput. Secur. 2022, 113, 102535. [Google Scholar] [CrossRef]
- Ban, T.; Samuel, N.; Takahashi, T.; Inoue, D. Combat Security Alert Fatigue with AI-Assisted Techniques. In CSET ’21: Proceedings of the 14th Cyber Security Experimentation and Test Workshop, Virtual, 9 August 2021; Association for Computing Machinery: New York, NY, USA, 2021; Volume 21, pp. 9–16. [Google Scholar] [CrossRef]
- Neupane, S.; Ables, J.; Anderson, W.; Mittal, S.; Rahimi, S.; Banicescu, I.; Seale, M. Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities. IEEE Access 2022, 10, 112392–112415. [Google Scholar] [CrossRef]
- Goodall, J.R.; Ragan, E.D.; Steed, C.A.; Reed, J.W.; Richardson, D.; Huffer, K.; Bridges, R.; Laska, J. Situ: Identifying and Explaining Suspicious Behavior in Networks. IEEE Trans. Vis. Comput. Graph. 2019, 25, 204–214. [Google Scholar] [CrossRef] [PubMed]
- Ndichu, S.; Ban, T.; Takahashi, T.; Inoue, D. A Machine Learning Approach to Detection of Critical Alerts from Imbalanced Multi-Appliance Threat Alert Logs. In Proceedings of the IEEE International Conference on Big Data (Big Data), Orlando, FL, USA, 15–18 December 2021; pp. 2119–2127. [Google Scholar] [CrossRef]
- Ofte, H.J.; Katsikas, S. Understanding Situation Awareness in SOCs, a Systematic Literature Review. Comput. Secur. 2023, 126, 103069. [Google Scholar] [CrossRef]
- Akinrolabu, O.; Agrafiotis, I.; Erola, A. The Challenge of Detecting Sophisticated Attacks: Insights from SOC Analysts. In Proceedings of the 13th International Conference on Availability, Reliability and Security, Hamburg, Germany, 27–30 August 2018; ACM: Hamburg, Germany, 2018; pp. 1–9. [Google Scholar] [CrossRef]
- Yen, T.-F.; Oprea, A.; Onarlioglu, K.; Leetham, T.; Robertson, W.; Juels, A.; Kirda, E. Beehive: Large-Scale Log Analysis for Detecting Suspicious Activity in Enterprise Networks. In ACSAC ’13: Proceedings of the 29th Annual Computer Security Applications Conference, New Orleans, LA, USA, 9–13 December 2013; Association for Computing Machinery: New York, NY, USA, 2013; pp. 199–208. [Google Scholar] [CrossRef]
- Zhong, C.; Yen, J.; Lui, P.; Erbacher, R. Learning From Experts’ Experience: Toward Automated Cyber Security Data Triage. IEEE Syst. J. 2019, 13, 603–614. [Google Scholar] [CrossRef]
- Alahmadi, B.A.; Axon, L.; Martinovic, I. 99% False Positives: A Qualitative Study of SOC Analysts’ Perspectives on Security Alarms. In Proceedings of the 31st Usenix Security Symposium, Boston, MA, USA, 10–12 August 2022; Usenix—The Advanced Computing Systems Association: Berkeley, CA, USA, 2022. [Google Scholar]
- Liu, J.; Zhang, R.; Liu, W.; Zhang, Y.; Gu, D.; Tong, M.; Wang, X.; Xue, J.; Wang, H. Context2Vector: Accelerating Security Event Triage via Context Representation Learning. Inf. Softw. Technol. 2022, 146, 106856. [Google Scholar] [CrossRef]
- Gutzwiller, R.S.; Fugate, S.; Sawyer, B.D.; Hancock, P.A. The Human Factors of Cyber Network Defense. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Los Angeles, CA, USA, 26–30 October 2015; Volume 59, pp. 322–326. [Google Scholar] [CrossRef]
- Chung, M.-H.; Yang, Y.; Wang, L.; Cento, G.; Jerath, K.; Raman, A.; Lie, D.; Chignell, M.H. Implementing Data Exfiltration Defense in Situ: A Survey of Countermeasures and Human Involvement. ACM Comput. Surv. 2023, 55. [Google Scholar] [CrossRef]
- Sworna, Z.T.; Islam, C.; Babar, M.A. APIRO: A Framework for Automated Security Tools API Recommendation. ACM Trans. Softw. Eng. Methodol. 2023, 32. [Google Scholar] [CrossRef]
- Hassan, W.U.; Guo, S.; Li, D.; Chen, Z.; Jee, K.; Li, Z.; Bates, A. NoDoze: Combatting Threat Alert Fatigue with Automated Provenance Triage. In Proceedings of the 2019 Network and Distributed System Security Symposium, San Diego, CA, USA, 24–27 February 2019. [Google Scholar] [CrossRef]
- Happa, J.; Agrafiotis, I.; Helmhout, M.; Bashford-Rogers, T.; Goldsmith, M.; Creese, S. Assessing a Decision Support Tool for SOC Analysts. Digit. Threats: Res. Pract. 2021, 2, 1–35. [Google Scholar] [CrossRef]
- Afzaliseresht, N.; Miao, Y.; Michalska, S.; Liu, Q.; Wang, H. From Logs to Stories: Human-Centred Data Mining for Cyber Threat Intelligence. IEEE Access 2020, 8, 19089–19099. [Google Scholar] [CrossRef]
- Kurogome, Y.; Otsuki, Y.; Kawakoya, Y.; Iwamura, M.; Hayashi, S.; Mori, T.; Sen, K. EIGER: Automated IOC Generation for Accurate and Interpretable Endpoint Malware Detection. In ACSAC ’19: Proceedings of the 35th Annual Computer Security Applications Conference, San Juan, PR, USA, 9–13 December 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 687–701. [Google Scholar] [CrossRef]
- Dietrich, C.; Krombholz, K.; Borgolte, K.; Fiebig, T. Investigating System Operators’ Perspective on Security Misconfigurations. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security, Toronto, ON, Canada, 15–19 October 2018; ACM: Toronto, ON, Canada, 2018; pp. 1272–1289. [Google Scholar] [CrossRef]
- Oprea, A.; Li, Z.; Norris, R.; Bowers, K. MADE: Security Analytics for Enterprise Threat Detection. In ACSAC’18: Proceedings of the 34th Annual Computer Security Applications Conference, San Juan, PR, USA, 3–7 December 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 124–136. [Google Scholar] [CrossRef]
- Kinyua, J.; Awuah, L. Ai/Ml in Security Orchestration, Automation and Response: Future Research Directions. Intell. Autom. Soft Comp. 2021, 28, 527–545. [Google Scholar] [CrossRef]
- Van Ede, T.; Aghakhani, H.; Spahn, N.; Bortolameotti, R.; Cova, M.; Continella, A.; van Steen, M.; Peter, A.; Kruegel, C.; Vigna, G. DEEPCASE: Semi-Supervised Contextual Analysis of Security Events. In Proceedings of the 2022 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 23–25 May 2022; pp. 522–539. [Google Scholar] [CrossRef]
- Chen, C.; Lin, S.C.; Huang, S.C.; Chu, Y.T.; Lei, C.L.; Huang, C.Y. Building Machine Learning-Based Threat Hunting System from Scratch. Digital Threats 2022, 3, 1–21. [Google Scholar] [CrossRef]
- Erola, A.; Agrafiotis, I.; Happa, J.; Goldsmith, M.; Creese, S.; Legg, P.A. RicherPicture: Semi-Automated Cyber Defence Using Context-Aware Data Analytics. In Proceedings of the 2017 International Conference on Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA), London, UK, 19–20 June 2017; pp. 1–8. [Google Scholar] [CrossRef]
- Naseer, A.; Naseer, H.; Ahmad, A.; Maynard, S.B.; Masood Siddiqui, A. Real-Time Analytics, Incident Response Process Agility and Enterprise Cybersecurity Performance: A Contingent Resource-Based Analysis. Int. J. Inf. Manag. 2021, 59, 102334. [Google Scholar] [CrossRef]
- Andrade, R.O.; Yoo, S.G. Cognitive Security: A Comprehensive Study of Cognitive Science in Cybersecurity. J. Inf. Secur. Appl. 2019, 48. [Google Scholar] [CrossRef]
- Husák, M.; Sadlek, L.; Špaček, S.; Laštovička, M.; Javorník, M.; Komárková, J. CRUSOE: A Toolset for Cyber Situational Awareness and Decision Support in Incident Handling. Comput. Secur. 2022, 115, 102609. [Google Scholar] [CrossRef]
- Chamberlain, L.B.; Davis, L.E.; Stanley, M.; Gattoni, B.R. Automated Decision Systems for Cybersecurity and Infrastructure Security. In Proceedings of the 2020 IEEE Security and Privacy Workshops (SPW), San Francisco, CA, USA, 18–20 May 2020; pp. 196–201. [Google Scholar] [CrossRef]
- González-Granadillo, G.; González-Zarzosa, S.; Diaz, R. Security Information and Event Management (SIEM): Analysis, Trends, and Usage in Critical Infrastructures. Sensors 2021, 21, 4759. [Google Scholar] [CrossRef] [PubMed]
- Demertzis, K.; Tziritas, N.; Kikiras, P.; Sanchez, S.L.; Iliadis, L. The next Generation Cognitive Security Operations Center: Adaptive Analytic Lambda Architecture for Efficient Defense against Adversarial Attacks. Big Data Cogn. Comput. 2019, 3, 6. [Google Scholar] [CrossRef]
- Tilbury, J.; Flowerday, S. The Rationality of Automation Bias in Security Operation Centers. J. Inf. Syst. Secur. 2024, 20, 87–107. [Google Scholar]
- Janssen, C.P.; Donker, S.F.; Brumby, D.P.; Kun, A.L. History and Future of Human-Automation Interaction. Int. J. Hum.-Comput. Stud. 2019, 131, 99–107. [Google Scholar] [CrossRef]
- Haltofová, P.; Štěpánková, P. An Application of the Boston Matrix within Financial Analysis of NGOs. Procedia-Soc. Behav. Sci. 2014, 147, 56–63. [Google Scholar] [CrossRef]
- Du, M.; Li, F.; Zheng, G.; Srikumar, V. DeepLog: Anomaly Detection and Diagnosis from System Logs through Deep Learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; ACM: New York, NY, USA; pp. 1285–1298. [Google Scholar] [CrossRef]
- Ianni, M.; Masciari, E. SCOUT: Security by Computing OUTliers on Activity Logs. Comput. Secur. 2023, 132, 103355. [Google Scholar] [CrossRef]
- Van der Schyff, K.; Flowerday, S.; Lowry, P.B. Information Privacy Behavior in the Use of Facebook Apps: A Personality-Based Vulnerability Assessment. Heliyon 2020, 6, e04714. [Google Scholar] [CrossRef] [PubMed]
- Endsley, M.R.; Kaber, D.B. Level of Automation Effects on Performance, Situation Awareness and Workload in a Dynamic Control Task. Ergonomics 1999, 42, 462–492. [Google Scholar] [CrossRef] [PubMed]
- Tilbury, J.; Flowerday, S. Automation Bias and Complacency in Security Operation Centers. Computers 2024. Forthcoming. [Google Scholar]
Technology Type | Number of Articles |
---|---|
AI/ML | 29 |
Not Classified (simple AI/ML) | 12 |
Neural Networks | 7 |
Visualization Modules | 4 |
Natural Language Processing | 3 |
Deep Learning | 2 |
Reinforcement Learning | 1 |
IDS | 3 |
SIEM/SOAR | 3 |
Other | 5 |
N/A | 9 |