Abstract
Defending against attackers with unknown behavior is an important area of research in security games. A well-established approach is to utilize historical attack data to create a behavioral model of the attacker. However, this presents a vulnerability: a clever attacker may change its own behavior during learning, leading to an inaccurate model and ineffective defender strategies. In this paper, we investigate how a wary defender can defend against such a deceptive attacker. We provide four main contributions. First, we develop a new technique to estimate the attacker's true behavior despite data manipulation by the clever adversary. Second, we extend this technique to be viable even when the defender has access to a minimal amount of historical data. Third, we utilize a maximin approach to optimize the defender's strategy against the worst case within the estimated uncertainty. Finally, we demonstrate the effectiveness of our counter-deception methods through extensive experiments, showing a clear gain for the defender and loss for the deceptive attacker.
Notes
1. In this paper, we focus on the one-shot game, which consists only of a learning phase and a planning phase, a commonly used security game model in the literature. Therefore, the deceptive attacker can simply play perfectly rationally in the planning phase after deceiving the defender in the learning phase. This model can also serve as the basis for repeated security games involving multiple learning-planning rounds, where the attacker plays deceptively in all rounds except the last.
Acknowledgement
This work was supported by ARO grant W911NF-20-1-0344 from the US Army Research Office.
A Appendix
A.1 Proof of Theorem 1
In order to prove this theorem, we introduce a series of Lemmas (3–7). For the sake of analysis, given the perturbed training data \(\mathcal {\hat{D}} = \{x^m_i, z^m_i\}\), we denote by \(y^m_i = \frac{z^m_i}{\sum _j z^m_j}\) the empirical attack distribution estimated from \(\mathcal {\hat{D}}\), and by \(c^m = \frac{1}{\sum _j z^m_j}\) the corresponding normalization term. Note that \(\{y^m_i, c^m\}\) and \(\{z^m_i\}\) are interchangeable: given \(\{y^m_i, c^m\}\), we can determine \(z^m_i = \frac{y^m_i}{c^m}\). We first present Lemma 3, which determines the deception capability of the deceptive attacker:
Lemma 3
Given the true behavior \(\lambda ^{\text {true}}\) of the non-deceptive attackers and the attack ratio f, the deceptive space for the deceptive attacker is specified as follows:
That is, any deceptive \(\lambda \) that the defender learns has to be a part of a feasible solution \((\lambda , y^m_i, c^m)\) of the system (14–17). Conversely, given any feasible \((\lambda , y^m_i, c^m)\) satisfying (14–17), the deceptive attacker can make the defender learn \(\lambda \) by inducing the following perturbed data:
Proof
Equation (14) is simply the \(\mathtt {KKT}\) condition presented in the previous section with \(y^m_i\) and \(c^m\) substituted in. Similarly, the constraints (15–16) correspond to the constraints for the deception capability of the deceptive attacker in (5–6). Finally, the constraint (17) follows from the definition of \(y^m_i\) and ensures that \(\sum _i\frac{z^m_i}{\sum _j z^m_j} = 1\) and \(\frac{z^m_i}{\sum _j z^m_j} \le 1\).
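For concreteness, this converse construction amounts to a one-line rescaling; the following minimal sketch (in Python, with illustrative names) assumes a feasible pair \((y^m, c^m)\) is given:

```python
# Sketch (illustrative): recover the perturbed attack counts z^m_i from a
# feasible pair (y^m, c^m) via z^m_i = y^m_i / c^m.
def induced_perturbed_data(y_m, c_m):
    return [y_i / c_m for y_i in y_m]

# Example: y^m = (0.2, 0.3, 0.5) with c^m = 0.02 yields z^m ~ (10, 15, 25).
print(induced_perturbed_data([0.2, 0.3, 0.5], 0.02))
```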
According to Lemma 3, we can now prove Theorem 1 based on the characterization of the feasible solution domain of \(\lambda \) for the system (14–17). We denote by \(\mathcal {F}(\lambda , \{y^m_i, c^m\})\) the LHS of (14). In addition, we denote by \(\mathbf {S} = \{(y^m_i, c^m) : \text {conditions (15--17) are satisfied}\}\) the feasible region of \((y^m_i, c^m)\). In the following, we provide Lemmas 4 and 5, which specify the range of \(\mathcal {F}\) as a function of \(\lambda \). Essentially, if the range of \(\mathcal {F}(\lambda , \cdot )\) over \(\mathbf {S}\) contains zero, then \(\lambda \) is a feasible solution of the system (14–17). We will use this property to characterize the feasible region of \(\lambda \).
Lemma 4
Assume that, WLOG, \(U^a_1(x^m_1) \le U^a_2(x^m_2)\le \dots \le U^a_T(x^m_T)\) for all m. Given a \(\lambda \), the optimal solution to
is determined as follows:
Proof
First, \(\mathcal {F}(\lambda , \{y^m_i, c^m\})\) can be reformulated as:
Under our assumption that \(U^a_1(x^m_1) \le U^a_2(x^m_2)\le \dots \le U^a_T(x^m_T)\), the term \([U^a_i(x^m_i) - U^a_T(x^m_T)]\) is non-positive for all i. Thus, maximizing \(\mathcal {F}\) involves minimizing \(y^m_i\) for every \(i < T\). From constraint (15), the minimum value of \(y^m_i\) is \(n^m_i c^m\), which gives us \(y^m_i = n^m_i c^m\) for \(i < T\). Constraint (17) then leaves us with \(y^m_T = 1 - c^m\sum _{i = 1}^{T-1} n^m_i\).
Finally, given this specification of \(\{y^m_i\}\), the optimization problem (18) is reduced to:
in which the objective function comprises two terms: the first term does not depend on \(\{c^m\}\), and the second term is a decreasing function of \(c^m\) (since \(U^a_{T}(x^m_T) - U^a(\mathbf {x}^m,\lambda ) > 0\)). Therefore, it is maximized when \(c^m\) is minimized, i.e., \(c^m = \frac{1}{(f+1) \sum _i n^m_i}\), concluding the proof.
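The maximizing assignment derived in this proof is mechanical, so we illustrate it with a small sketch (Python, illustrative only); here `n_m` denotes the non-deceptive attack counts per target for sample m, sorted by increasing attacker utility, and `f` is the attack ratio:

```python
# Sketch (illustrative) of the maximizing assignment from Lemma 4.
# n_m[i]: non-deceptive attack count on target i in sample m, with targets
#         ordered so that U^a_1(x^m_1) <= ... <= U^a_T(x^m_T).
# f: ratio of attacks controlled by the deceptive attacker.
def maximizing_assignment(n_m, f):
    """Return the (y, c) pair attaining F^max for one training sample."""
    T = len(n_m)
    total = sum(n_m)
    c = 1.0 / ((f + 1) * total)              # smallest feasible c^m
    y = [n_m[i] * c for i in range(T - 1)]   # minimum y^m_i for i < T (15)
    y.append(1.0 - c * sum(n_m[:T - 1]))     # remaining mass on target T (17)
    return y, c

# Example: three targets with 10 observed attacks each and f = 0.5.
y, c = maximizing_assignment([10, 10, 10], 0.5)
print(y, c)  # y sums to 1; all mass beyond the floor goes to target T
```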
Lemma 5
Assume that, WLOG, \(U^a_1(x^m_1) \le U^a_2(x^m_2)\le \dots \le U^a_T(x^m_T)\) for all m. Given a \(\lambda \), the optimal solution to
is determined as follows:
The proof of Lemma 5 is similar. Finally, using Lemmas (4–5) and the approximation in Eq. 7, we obtain:
Observe that, given \(\lambda \), \(\mathcal {F}(\lambda , \cdot )\) is continuous in \(\{y^m_i, c^m\}\). Therefore, given a \(\lambda '\), if \(\mathcal {F}^{\max }(\lambda ') \ge 0\ge \mathcal {F}^{\min }(\lambda ')\), there must exist \(\{y^m_i, c^m\} \in \mathbf {S}\) such that \(\mathcal {F}(\lambda ', \{y^m_i, c^m\}) = 0\). In other words, \(\lambda '\) is a part of a feasible solution for (14–17). Conversely, if \(\mathcal {F}^{\max }(\lambda ') < 0\) or \(\mathcal {F}^{\min }(\lambda ') > 0\), it means \(\lambda '\) is not feasible for (14–17). Moreover, using Observation 1, we can infer that both \(\mathcal {F}^{\max }\) and \(\mathcal {F}^{\min }\) are continuous and decreasing in \(\lambda \). We obtain Lemma 6 which states that feasible solutions of (14–17) form an interval.
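Before stating Lemma 6, note that this characterization amounts to a simple membership test; in the minimal sketch below (Python), `f_max` and `f_min` are hypothetical evaluators of \(\mathcal {F}^{\max }\) and \(\mathcal {F}^{\min }\), e.g., computed via Lemmas 4 and 5:

```python
# Membership test implied by the argument above (sketch): lambda is part of a
# feasible solution of (14-17) iff the range of F(lambda, .) over S contains 0.
def is_feasible(lam, f_max, f_min):
    return f_min(lam) <= 0.0 <= f_max(lam)
```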
Lemma 6
Let us assume \(\lambda _1 < \lambda _2\) are two feasible solutions of (14–17). Then any \(\lambda \in [\lambda _1,\lambda _2]\) is also a feasible solution of the system.
Proof
Since \(\lambda _1\) and \(\lambda _2\) are feasible solutions of (14–17), we obtain the inequalities:
For any \(\lambda \in [\lambda _1,\lambda _2]\), since \(\mathcal {F}^{\max }\) and \(\mathcal {F}^{\min }\) are decreasing functions in \(\lambda \), the following inequality holds true:
which implies that \(\lambda \) is also a feasible solution for (14–17), concluding the proof.
Lemma 7 specifies the interval \([\lambda ^{\text {learnt}}_{\min },\lambda ^{\text {learnt}}_{\max }]\) of feasible \(\lambda \) values for (14–17).
Lemma 7
There exist \(\lambda ^{\text {learnt}}_{\max } \ge \lambda ^{\text {learnt}}_{\min }\) such that:
which means \(\lambda ^{\text {learnt}}_{\min }\) and \(\lambda ^{\text {learnt}}_{\max }\) are feasible solutions for (14–17) and any \(\lambda \notin [\lambda ^{\text {learnt}}_{\min },\lambda ^{\text {learnt}}_{\max }]\) is not a feasible solution for (14–17).
Proof
As noted before, \(\mathcal {F}^{\max }(\lambda )\) is a continuous and decreasing function in \(\lambda \). On the other hand, we have:
for all \(\lambda ^{\text {true}}\), since \(U^a(\mathbf {x}^m,\lambda ^{\text {true}} = +\infty ) = U^a_T(x^m_T)\) and \(U^a(\mathbf {x}^m,\lambda ^{\text {true}} = -\infty ) = U^a_1(x^m_1)\) are the highest and lowest expected utilities for the attacker among all targets, respectively, and, by Observation 1, \(U^a(\mathbf {x}^m,\lambda ^{\text {true}})\) is increasing in \(\lambda ^{\text {true}}\). Since \(\mathcal {F}^{\max }(\lambda )\) is continuous, there must exist a value \(\lambda ^{\text {learnt}}_{\max } \in (-\infty ,+\infty )\) such that \(\mathcal {F}^{\max }(\lambda ^{\text {learnt}}_{\max }) = 0\). The proof for \(\lambda ^{\text {learnt}}_{\min }\) is similar.
Finally, for any \(\lambda < \lambda ^{\text {learnt}}_{\min }\), we have \(\mathcal {F}^{\min }(\lambda ) > \mathcal {F}^{\min }(\lambda ^{\text {learnt}}_{\min }) = 0\) since \(\mathcal {F}^{\min }\) is decreasing in \(\lambda \). Similarly, for any \(\lambda > \lambda ^{\text {learnt}}_{\max }\), we have \(\mathcal {F}^{\max }(\lambda ) < \mathcal {F}^{\max }(\lambda ^{\text {learnt}}_{\max }) = 0\). Both imply that \(\lambda \) is not feasible, concluding our proof.
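Since \(\mathcal {F}^{\max }\) is continuous and decreasing, \(\lambda ^{\text {learnt}}_{\max }\) can be located numerically by bisection; the sketch below (Python) is illustrative only, with `f_max` again a hypothetical evaluator of \(\mathcal {F}^{\max }\) and an arbitrary bracket and tolerance. \(\lambda ^{\text {learnt}}_{\min }\) is found analogously using \(\mathcal {F}^{\min }\).

```python
# Sketch (illustrative): locate lambda^learnt_max as the root of the
# continuous, decreasing function F^max via bisection.
def find_lambda_learnt_max(f_max, lo=-50.0, hi=50.0, tol=1e-6):
    # Assumes f_max(lo) > 0 > f_max(hi), which Lemma 7 guarantees for a
    # sufficiently wide bracket.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f_max(mid) >= 0.0:   # root lies to the right of mid
            lo = mid
        else:                   # root lies to the left of mid
            hi = mid
    return 0.5 * (lo + hi)
```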
By combining Lemmas 3, 6, and 7, we obtain Theorem 1.
Proof of Corollary 1
Proof
Corollary 1 is deduced based on the monotonicity property of the attacker’s utility (Observation 1). When \(\lambda ^{\text {true}}_1 \le \lambda ^{\text {true}}_2\), we have \(U^a(\mathbf {x}^m;\lambda ^{\text {true}}_1)\le U^a(\mathbf {x}^m;\lambda ^{\text {true}}_2)\) for all m. Based on the relationship between \(U^a(\mathbf {x}^m;\lambda ^{\text {true}})\) and \(U^a(\mathbf {x}^m;\lambda ^{\text {learnt}}_{\max })\) presented in Theorem 1, we readily obtain \(\lambda ^{\text {learnt}}_{\max ,1}\le \lambda ^{\text {learnt}}_{\max ,2}\). Similarly, we have: \(\lambda ^{\text {learnt}}_{\min ,1}\le \lambda ^{\text {learnt}}_{\min ,2}\).
Proof of Corollary 2
Proof
We first prove (8). Let’s consider the true behavior parameters \(\lambda ^{\text {true}}_1\le \lambda ^{\text {true}}_2\). Based on Corollary 1, the corresponding optimal deception solutions have to belong to the deception ranges: \(\mathtt {DecAlter}(\lambda ^{\text {true}}_1)\in [\lambda ^{\text {learnt}}_{\min ,1},\lambda ^{\text {learnt}}_{\max ,1}]\) and \(\mathtt {DecAlter}(\lambda ^{\text {true}}_2)\in [\lambda ^{\text {learnt}}_{\min ,2},\lambda ^{\text {learnt}}_{\max ,2}]\) where \(\lambda ^{\text {learnt}}_{\min ,1}\le \lambda ^{\text {learnt}}_{\min ,2}\) and \(\lambda ^{\text {learnt}}_{\max ,1}\le \lambda ^{\text {learnt}}_{\max ,2}\). We have two cases:
The first case is when the deception ranges do not overlap, i.e., \(\lambda ^{\text {learnt}}_{\max ,1} < \lambda ^{\text {learnt}}_{\min ,2}\). In this case, it is apparent that \(\mathtt {DecAlter}(\lambda ^{\text {true}}_1) < \mathtt {DecAlter}(\lambda ^{\text {true}}_2)\).
The other case is when the ranges overlap, i.e., \(\lambda ^{\text {learnt}}_{\max ,1} \ge \lambda ^{\text {learnt}}_{\min ,2}\). If the optimal deceptive value for one or both does not belong to the overlap, i.e., \(\mathtt {DecAlter}(\lambda ^{\text {true}}_1) < \lambda ^{\text {learnt}}_{\min ,2}\) and/or \(\mathtt {DecAlter}(\lambda ^{\text {true}}_2) > \lambda ^{\text {learnt}}_{\max ,1}\), the result is the same as in the previous case: \(\mathtt {DecAlter}(\lambda ^{\text {true}}_1) < \mathtt {DecAlter}(\lambda ^{\text {true}}_2)\). On the other hand, if both values fall within the overlap, that is, \(\lambda ^{\text {learnt}}_{\min ,2} \le \mathtt {DecAlter}(\lambda ^{\text {true}}_1),\mathtt {DecAlter}(\lambda ^{\text {true}}_2)\le \lambda ^{\text {learnt}}_{\max ,1}\), both take on the same value, i.e., \(\mathtt {DecAlter}(\lambda ^{\text {true}}_1) = \mathtt {DecAlter}(\lambda ^{\text {true}}_2)\). This is true because both deceptive values \(\mathtt {DecAlter}(\lambda ^{\text {true}}_1)\) and \(\mathtt {DecAlter}(\lambda ^{\text {true}}_2)\) are optimized to maximize the same objective: the utility of the deceptive attacker (as shown in \(\mathtt {DecAlter}\)).
Finally, (9) can be easily deduced based on (8). Let’s consider \(\mathtt {DecAlter}(\lambda ^{\text {true}}_1) < \mathtt {DecAlter}(\lambda ^{\text {true}}_2)\). We can prove \(\lambda ^{\text {true}}_1 < \lambda ^{\text {true}}_2\) by contradiction. That is, we assume \(\lambda ^{\text {true}}_1 \ge \lambda ^{\text {true}}_2\). According to (8), it means \(\mathtt {DecAlter}(\lambda ^{\text {true}}_1) \ge \mathtt {DecAlter}(\lambda ^{\text {true}}_2)\), which is a contradiction.
Proof of Corollary 3
Proof
Corollary 3 is a direct result of Corollary 2. Indeed, since \(\lambda ^{\text {true}}_1 \le \lambda ^{\text {true}}\le \lambda ^{\text {true}}_2\), we obtain the inequality among optimal deception solutions \(\mathtt {DecAlter}(\lambda ^{\text {true}}_1)\le \mathtt {DecAlter}(\lambda ^{\text {true}})\le \mathtt {DecAlter}(\lambda ^{\text {true}}_2)\) as a result of Corollary 2. Therefore if \(\mathtt {DecAlter}(\lambda ^{\text {true}}_1) = \mathtt {DecAlter}(\lambda ^{\text {true}}_2)\), we obtain the optimal deception solution: \(\mathtt {DecAlter}(\lambda ^{\text {true}}) = \mathtt {DecAlter}(\lambda ^{\text {true}}_1)\).
Proof of Lemma 1
Proof
Corollary 2 says that the deception outcome \(\lambda ^{\text {learnt}} = \mathtt {DecAlter}(\lambda ^{\text {true}})\) is an increasing (not strictly) function of \(\lambda ^{\text {true}}\). Combining this with Corollary 3, we can say that, given some deception outcome \(\lambda ^{\text {learnt}}\), there exist (unknown) \(\lambda ^{\text {true}}_{\min },\lambda ^{\text {true}}_{\max }\) such that any \(\lambda ^{\text {true}} \in [\lambda ^{\text {true}}_{\min },\lambda ^{\text {true}}_{\max }]\) leads to the same outcome \(\lambda ^{\text {learnt}} = \mathtt {DecAlter}(\lambda ^{\text {true}})\), while any \(\lambda \) outside of \([\lambda ^{\text {true}}_{\min },\lambda ^{\text {true}}_{\max }]\) cannot lead to the deception outcome \(\lambda ^{\text {learnt}}\). Corollary 2 further implies that \(\lambda ^{\text {true}}_{\min }\) and \(\lambda ^{\text {true}}_{\max }\) are increasing functions of \(\lambda ^{\text {learnt}}\).
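Because \(\mathtt {DecAlter}\) is non-decreasing, the interval \([\lambda ^{\text {true}}_{\min },\lambda ^{\text {true}}_{\max }]\) corresponding to an observed \(\lambda ^{\text {learnt}}\) can be bracketed by binary search over candidate true values. The sketch below is a simplified illustration of this idea rather than our exact procedure; `dec_alter` is a hypothetical simulator of the attacker's optimal deception for a candidate \(\lambda ^{\text {true}}\), and the bracket, tolerance, and equality slack are arbitrary:

```python
# Sketch (illustrative): bracket the preimage interval of a non-decreasing
# deception mapping dec_alter at the observed outcome lam_learnt.
def preimage_interval(dec_alter, lam_learnt, lo=-50.0, hi=50.0,
                      tol=1e-4, eps=1e-6):
    def below(lam):   # deception outcome still below the observed one
        return dec_alter(lam) < lam_learnt - eps

    def above(lam):   # deception outcome already above the observed one
        return dec_alter(lam) > lam_learnt + eps

    # Lower endpoint: smallest lambda whose outcome reaches lam_learnt.
    a, b = lo, hi
    while b - a > tol:
        m = 0.5 * (a + b)
        a, b = (m, b) if below(m) else (a, m)
    lam_true_min = b

    # Upper endpoint: largest lambda whose outcome has not exceeded lam_learnt.
    a, b = lo, hi
    while b - a > tol:
        m = 0.5 * (a + b)
        a, b = (a, m) if above(m) else (m, b)
    lam_true_max = a

    return lam_true_min, lam_true_max
```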
Proof of Lemma 2
Proof
Assume, WLOG, that \(U^a_1(x^m_1) \le U^a_2(x^m_2)\le \dots \le U^a_T(x^m_T)\). We claim that \(S(i,\lambda ^{\text {true}}) = \sum _{j=1}^i q_j(\mathbf {x}^m;\lambda ^{\text {true}})\) for \(T > i \ge 1\) is decreasing (not strictly) in \(\lambda ^{\text {true}}\); in other words, the upper bound of the \(i^{th}\) segment is decreasing (not strictly) for all i except \(i=T\). This means that for any single fixed u value, increasing \(\lambda ^{\text {true}}\) implies that \(f_{\lambda ^{\text {true}}}(u)\) also increases (or stays the same), because the upper bound of the interval that u lies in shifts downwards as \(\lambda ^{\text {true}}\) increases. An increase in \(f_{\lambda ^{\text {true}}}(u)\) means that a higher-value target is chosen for attack. Thus, for fixed u, a higher \(\lambda ^{\text {true}}\) implies that the empirical distribution places more (or equal) attacks on higher-utility targets, and hence \(U^a(\mathbf {x}^m,E(u;\lambda ^{\text {true}}))\) increases (not strictly) with \(\lambda ^{\text {true}}\). Finally, to prove the claim at the start of the proof, we show that the derivative of \(S(i,\lambda ^{\text {true}})\) is non-positive everywhere. Indeed, its derivative is computed as follows:
Decomposing the attacker utility function \(U^a(\mathbf {x}^m;\lambda ^{\text {true}})\) as follows:
Since \(U^a_1(x^m_1) \le U^a_2(x^m_2) \le \ldots \le U^a_T(x^m_T)\), the following inequality holds:
Using this we get:
Substituting the above into the derivative in Eq. 28, we get that the derivative of \(S(i,\lambda ^{\text {true}})\) is non-positive; hence \(S(i,\lambda ^{\text {true}})\) is decreasing (not strictly) w.r.t. \(\lambda ^{\text {true}}\), concluding our proof.
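To make the monotonicity argument concrete, the following small numerical sketch (Python) computes the segment bounds \(S(i,\lambda ^{\text {true}})\) and the inverse-transform map \(f_{\lambda ^{\text {true}}}(u)\), assuming the standard quantal response form \(q_i(\mathbf {x};\lambda ) \propto e^{\lambda U^a_i(x_i)}\); the utilities and \(\lambda \) values are arbitrary examples:

```python
import math

def qr_probs(utils, lam):
    """Quantal response distribution: q_i proportional to exp(lam * U^a_i)."""
    w = [math.exp(lam * u) for u in utils]
    z = sum(w)
    return [wi / z for wi in w]

def cumulative_S(utils, lam):
    """Segment upper bounds S(i, lam) = sum_{j<=i} q_j for i = 1..T-1."""
    q = qr_probs(utils, lam)
    bounds, s = [], 0.0
    for qi in q[:-1]:
        s += qi
        bounds.append(s)
    return bounds

def sample_target(utils, lam, u):
    """Inverse-transform map f_lam(u): index of the segment containing u."""
    for i, bound in enumerate(cumulative_S(utils, lam)):
        if u <= bound:
            return i
    return len(utils) - 1

# Utilities sorted increasingly, as assumed in the proof; fixed u = 0.6.
utils = [1.0, 2.0, 3.0]
for lam in (0.0, 0.5, 1.0, 2.0):
    print(lam, cumulative_S(utils, lam), sample_target(utils, lam, 0.6))
# The segment bounds shrink as lam grows, so for the same u the sampled
# target index (and hence its utility) is non-decreasing in lam.
```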
Supplemental Experiments. First, in Fig. 4, we examine the range \([\lambda ^{\text {true}}_{\min }, \lambda ^{\text {true}}_{\max }]\) that the defender learns. Figure 4a shows that the range increases w.r.t. the percentage of attacks controlled by the deceptive attacker. This is intuitive, as more manipulation gives more power to the deceptive attacker. Figure 4b displays how this range also increases with the ground truth \(\lambda ^{\text {true}}\) value of the non-deceptive attackers. As \(\lambda ^{\text {true}}\) increases, the deceptive attacker produces a larger uncertainty range.
Lastly, Figs. 5 through 6 are for 30-target games, and each corresponds to a previously discussed 20-target figure. We observe the same trends in both cases.
Experimental Details. All experiments were run on the same HPC cluster, on instances using dual E5-2690v4 processors (28 cores). Each process was allocated 16000 megabytes of RAM. Instances run Red Hat Enterprise Linux Server, version 7.8. The Matlab version used was R2018b.
All experiments used the L-Infinity norm with a value of 2 as a rejection threshold for non-deceptive attack samples. This was done to prevent outlying samples from compromising the binary search. Values between 0.5 and 5 were tested for this threshold, along with the same value ranges for the L1 and L2 norms. This norm and value were shown to produce the best results without drastically increasing the runtime of the algorithm.
Additionally, all experiments used a value of 0.05 as a tolerance multiplier within the binary search itself. This prevents the inherent inaccuracy of discrete attack samples from derailing the binary search. For the sake of consistency, an initial random number generation seed of 1 was used across all experiments. After defender strategy generation and solving (\(\mathtt {DecAlter}\)), the binary search is run 10 times, each time with a different random seed. The union of all resulting ranges forms our final uncertainty set for \(\lambda ^{\text {true}}\).
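For clarity, a minimal sketch of this aggregation step is given below (Python); `run_binary_search` is a hypothetical routine returning one estimated range \([\lambda ^{\text {true}}_{\min },\lambda ^{\text {true}}_{\max }]\) for a given seed:

```python
# Sketch (illustrative): union of the per-seed binary-search ranges.
def aggregate_uncertainty_set(run_binary_search, seeds=range(10)):
    ranges = [run_binary_search(seed) for seed in seeds]
    return min(r[0] for r in ranges), max(r[1] for r in ranges)
```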
The trials shown in Figs. 4a, 2a, 2c, 3, 6c, 5a, 5c, and 6 were conducted using a true lambda value of 0.4 and a resource/target ratio of 0.2. Those in Figs. 4b, 2b, 2d, 6d, 5d, and 5d utilized a deceptive attack percentage of 0.3 and a resource/target ratio of 0.2. Experiments in Figs. 3 and use a deceptive attack percentage of 0.1, a true lambda value of 0.4, and a resource/target ratio of 0.2.