Abstract
The recent introduction of the big data paradigm and advancements in machine learning and deep learning techniques have made proof guidance and automation in interactive theorem provers (ITPs) an important research topic. In this paper, we provide a learning approach based on sequential pattern mining (SPM) for proof guidance in the PVS proof assistant. Proofs in a PVS theory are first abstracted to a computer-processable corpus. SPM techniques are then used on the corpus to discover frequent proof steps and proof patterns, relationships of proof steps/patterns with each other, the dependency of new conjectures on already proved facts, and to predict the next proof step(s). The obtained results suggest that the integration of SPM in proof assistants can be used to guide the proof process and to develop proof tactics/strategies.
1 Introduction
Theorem provers allow the formal development and verification of system properties that can be defined in appropriate logical formalisms. Automated (first-order) theorem provers (ATPs) are computer programs that can perform logical reasoning automatically. However, first-order logic (FOL) lacks the expressive power required to define complex systems with an infinite domain. Higher-order logic (HOL), on the other hand, allows quantification over predicates and functions. HOL based theorem provers, also known as interactive theorem provers (ITPs), offer support for rich logical formalisms such as dependent and (co)inductive types as well as recursive functions, which enables ITPs to model complex systems. Today, these mechanical reasoning systems are used in verification projects that range from operating systems, compilers and hardware components to large mathematical proofs such as the Feit-Thompson theorem and the Kepler conjecture [22]. However, automatic reasoning in ITPs is still a hard problem, since the underlying proof methods are in general undecidable [20].
Unlike ATPs, where the proof process is generally automatic, ITPs follow a user-driven proof development process. The user guides the proof by providing the proof goal and by applying proof commands and tactics to prove it. Typically, the user does a lot of repetitive work to prove a non-trivial theorem (goal), which is laborious and consumes a large amount of time. Proof guidance and proof automation in ITPs are therefore two extremely desirable features. ITPs now have large corpora of computer-understandable formalized knowledge [5, 19] in the form of proof libraries. In PVS, the proof scripts for a particular theory are stored separately in a file that can be considered as a proof corpus for the theorems and lemmas in that theory. Proof scripts of different theories can be combined to form a larger corpus. Such corpora play an important role in artificial intelligence based methods, such as concept matching, structure formation and theory exploration. The fast ongoing progress in machine learning and data mining has made it possible to use these learning techniques on such corpora for guiding the proof search process, for proof automation, and for developing proof tactics/strategies, as indicated in recent works [8, 9, 15,16,17, 21, 26].
In this paper, the focus is on proof guidance and premise selection in ITPs from the perspective of sequential pattern mining (SPM) techniques. SPM techniques are used in data mining to find interesting and useful patterns (information) that are hidden in large corpora of sequential data [14]. A particular proof goal in PVS depends on the specifications inside the theory, and it can be completed with different combinations of proof commands, inference rules and decision procedures [30]. This makes it difficult to infer, from specific examples, useful proof tactics and strategies that can be applied more generally. Moreover, a proof corpus contains a large amount of information, which makes a brute-force approach to proof search infeasible. However, there is the potential to identify useful and interesting hidden proof patterns in these corpora, together with the relationships of such proof patterns to each other. With such information, SPM techniques can be used to investigate the dependency of new conjectures on already proved facts and to predict the next proof step(s) or pattern(s) for guiding the proof of a novel non-trivial theorem/lemma.
We present an SPM-based proof process learning approach for the PVS proof assistant. The basic idea is to convert the PVS proofs for a theory into a proof corpus that is suitable for learning. SPM techniques are applied on the corpus to find frequent proof steps and patterns that are used in the proofs. Moreover, the relationships of proof steps/patterns with each other are discovered through sequential rule mining. The learning approach is also used to find the relevance of new conjectures to the existing proofs, and the performance of some state-of-the-art prediction models is examined by training and testing them on the corpus to predict the next proof step(s). Besides PVS, the proposed approach can also be used to guide the proof process in other proof assistants. The ultimate goal is to develop proof tactics/strategies from useful patterns that can be invoked directly by the user in the proof development process.
The rest of this paper is organized as follows: Sect. 2 elaborates the SPM-based learning approach that is used to discover useful proof steps/patterns in the proof corpus, their relationships, and the prediction of next proof step(s). The evaluation of the proposed approach on a case study and the obtained results are discussed in Sect. 3. Related work on using machine learning and data mining techniques for automated reasoning in ATPs and ITPs is presented in Sect. 4. Finally, the paper is concluded with some future directions in Sect. 5. The PVS dump files and SPM related data for this work can be found at [31].
2 Proof Corpus Mining with SPM
The structure of the SPM-based learning approach is shown in Fig. 1. It consists of two main parts:

1. Development of proof corpus: PVS proof steps for theorems and lemmas in a theory are converted to a proof corpus, where each complete proof is abstracted to a sequence of proof commands.

2. Learning through SPM: SPM algorithms are used on the corpus to discover common proof steps and patterns, relationships of proof steps/patterns with each other, the dependency of new conjectures on already proved facts, and to predict the next proof step(s).

Each part is further elaborated next.
In general, data is assembled first so that data mining algorithms can be used. To make the proof corpus suitable for learning, it should satisfy certain minimum requirements:

- It is stored in a computational and electronic form.

- It contains many examples of proofs that offer diversity in the kinds of proof steps. The corpus should contain different proof steps so that useful proof patterns, the dependency of proof steps, and predictions of next proof steps can be discovered.

- It is transformed into a suitable abstraction, so that no meaningful information from the proofs is left out. For this, we use the "proof sequences to integers" abstraction, where each proof command is converted to a distinct positive integer. Such an abstraction allows wide diversity and makes the approach more general in nature.
Besides the dump file, which contains the specifications for a particular theory together with the imported libraries and the proof scripts (collections of proof steps) for its theorems/lemmas, PVS also saves the proof scripts for a theory in a separate proof file. These files contain the proof commands along with some other PVS-related information. After removing the redundant information from the proof files, each complete proof is a sequence of proof steps. In the following, we present some concepts related to sequences in the context of this work.
Let \(PS= \{ps_1, ps_2, . . . , ps_m \}\) represent the set of proof commands. A proof steps set PSS is a set of proof commands, that is \(PSS \subseteq PS\). |PSS| denotes the set cardinality. PSS has a length k (called k-PSS) if it contains k proof commands, i.e., \(|PSS| = k\). For example, consider the set of PVS proof commands \(PS = \{\)skolem, flatten, inst?, split, beta, iff, assert\(\}\). The set \(\{\)skolem, flatten, assert\(\}\) is a proof steps set that contains three proof commands. For the purpose of processing commands in some order, a total order relation on proof commands is assumed to exist (e.g. the lexicographical order), denoted as \(\prec \).
A proof sequence is a list of proof steps sets \(S = \langle PSS_1 , PSS_2 , ..., PSS_n \rangle \), such that \(PSS_i \subseteq PS\) (\(1 \le i \le n\)). For example, \(\langle \{\)skolem, flatten\(\}\), \(\{\)inst?\(\}\), \(\{\)beta, iff\(\}\), \(\{\)assert\(\}\rangle \) is a proof sequence with four proof steps sets, used to prove a theorem. A proof corpus PC is a list of proof sequences \(PC = \langle S_1 , S_2 ,..., S_p \rangle \), where each sequence has an identifier (ID). For example, Table 1 shows a PC that has five proof sequences with IDs 1, 2, 3, 4 and 5.
The final step is to convert the proof sequences into sequences of integers to bring the corpus into a format suitable for SPM techniques. In the final corpus, each line represents a proof sequence that was used for the proof of a theorem/lemma. Each proof command in the sequence is replaced by a positive integer; for example, the proof command skosimp is replaced by 1. Moreover, proof commands are separated by a single space followed by the negative integer -1. The negative integer -2 appears at the end of each line to indicate the end of a proof sequence. Note that the same integer is used for similar proof commands such as (inst?) and (inst fnum constants), or (skosimp) and (skosimp*). This makes the learning process more general in nature, so that it can be used for other PVS theories, in particular for the PVS library. A fragment of such an encoded corpus is shown below.
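For illustration, two encoded proof sequences in this format might look as follows (a sketch; the integer assignments here are hypothetical, e.g., 1 for skosimp, 2 for flatten, 3 for inst? and 4 for assert):

```
1 -1 2 -1 3 -1 4 -1 -2
1 -1 4 -1 -2
```

The first line encodes the proof sequence \(\langle\)skosimp, flatten, inst?, assert\(\rangle\) and the second \(\langle\)skosimp, assert\(\rangle\). This is also the standard sequence database input format of the SPMF library used in Sect. 3.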
A proof sequence \(S_\alpha = \langle \alpha _1 , \alpha _2 , ..., \alpha _n \rangle \) is present or contained in another proof sequence \(S_\beta = \langle \beta _1, \beta _2, ..., \beta _m \rangle \) iff there exist integers \(1 \le i_1< i_2<...< i_n \le m\), such that \(\alpha _1 \subseteq \beta _{i_1}, \alpha _2 \subseteq \beta _{i_2}, ..., \alpha _n \subseteq \beta _{i_n}\) (denoted as \(S_\alpha \sqsubseteq S_\beta \)). If \(S_\alpha \) is present in \(S_\beta \), then \(S_\alpha \) is a subsequence of \(S_\beta \). In SPM, various measures are used to evaluate the importance and interestingness of a subsequence. The support measure is used by most SPM techniques. The support of \(S_\alpha \) in PC is the total number of sequences (S) that contain \(S_\alpha \), and is represented by \(sup(S_\alpha )\): \(sup(S_\alpha ) = |\{S \mid S \in PC \wedge S_\alpha \sqsubseteq S\}|\).
SPM is an enumeration problem that aims to find all the frequent subsequences in a sequential dataset. A sequence S is a frequent sequence (also called sequential pattern) iff \({sup(S)} \ge {minsup}\), where minsup (minimum support) is a threshold determined by the user. A sequence containing n items (proof commands in this work) in a corpus can have up to \(2^{n}-1\) distinct subsequences. This makes the naive approach of calculating the support of all possible subsequences infeasible for most corpora. Several efficient algorithms have been developed in recent years that do not explore the whole search space of possible subsequences. A small sketch of the support computation itself is given below.
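To make the support measure concrete, the following self-contained Java sketch (our own illustration, not the paper's implementation; all names are ours) counts the support of a candidate pattern in a corpus, treating each proof sequence as a list of proof steps sets:

```java
import java.util.*;

public class SupportCounter {

    // Checks whether pattern is contained in sequence: each proof steps set
    // of the pattern must be a subset of some proof steps set of the
    // sequence, in order (S_alpha is a subsequence of S_beta).
    static boolean contains(List<Set<String>> sequence, List<Set<String>> pattern) {
        int i = 0; // index into pattern
        for (Set<String> pss : sequence) {
            if (i < pattern.size() && pss.containsAll(pattern.get(i))) {
                i++;
            }
        }
        return i == pattern.size();
    }

    // Support = number of sequences in the corpus that contain the pattern.
    static int support(List<List<Set<String>>> corpus, List<Set<String>> pattern) {
        int sup = 0;
        for (List<Set<String>> sequence : corpus) {
            if (contains(sequence, pattern)) sup++;
        }
        return sup;
    }

    public static void main(String[] args) {
        // Proof sequence <{skolem, flatten}, {inst?}, {assert}>
        List<Set<String>> s1 = List.of(Set.of("skolem", "flatten"),
                                       Set.of("inst?"), Set.of("assert"));
        List<Set<String>> s2 = List.of(Set.of("flatten"), Set.of("assert"));
        List<List<Set<String>>> corpus = List.of(s1, s2);

        // Pattern <{flatten}, {assert}> is contained in both sequences.
        List<Set<String>> pattern = List.of(Set.of("flatten"), Set.of("assert"));
        System.out.println(support(corpus, pattern)); // prints 2
    }
}
```

With minsup = 2, the pattern \(\langle \{\)flatten\(\}, \{\)assert\(\}\rangle\) in this toy corpus is frequent; efficient SPM algorithms avoid enumerating all \(2^{n}-1\) candidates by pruning, but the containment test above is the underlying primitive.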
All SPM algorithms investigate the pattern search space with two operations: s-extensions and i-extensions. A sequence \(S_\alpha = \langle \alpha _1 , \alpha _2 , ..., \alpha _n \rangle \) is a prefix of another sequence \(S_\beta = \langle \beta _1, \beta _2, ..., \beta _m \rangle \), if \(n < m\), \(\alpha _1\) = \(\beta _1\), \(\alpha _2\) = \(\beta _2\), ..., \(\alpha _{n-1}\) = \(\beta _{n-1}\), where \(\alpha _n\) is equal to the first \(|\alpha _n|\) items of \(\beta _n\) according to the \(\prec \) order. Note that SPM algorithms follow a specific order \(\prec \) so that the same potential patterns are not considered twice; the choice of the order \(\prec \) does not affect the final result produced by SPM algorithms. A sequence \(S_\beta \) is an s-extension of a sequence \(S_\alpha \) for an item x if \(S_\beta \) = \(\langle \alpha _1 , \alpha _2 , ..., \alpha _n, \{x\}\rangle \). Similarly, for an item x, a sequence \(S_\gamma \) is an i-extension of \(S_\alpha \) if \(S_\gamma \) = \(\langle \alpha _1 , \alpha _2 , ..., \alpha _n \cup \{x\}\rangle \). For example, \(\langle \{\)flatten\(\}, \{\)assert\(\}\rangle\) is an s-extension of \(\langle \{\)flatten\(\}\rangle\) with the item assert, whereas \(\langle \{\)flatten, assert\(\}\rangle\) is an i-extension. SPM algorithms either employ a breadth-first search or a depth-first search. In the following, a brief description of the state-of-the-art SPM algorithms used in this work is presented.
The TKS (Top-k Sequential) algorithm finds the top-k sequential patterns in a corpus, where k is set by the user and represents the number of sequential patterns to be discovered by the algorithm. TKS employs the basic candidate generation procedure of SPAM and a vertical database representation. With the vertical representation, the support of patterns can be calculated without performing costly database scans, which makes vertical algorithms perform better on dense or long sequences. TKS also uses several strategies for search space pruning and relies on the PMAP (Precedence Map) data structure to avoid costly bit vector intersection operations. Another SPM algorithm is CM-SPAM, which performs a depth-first search to discover frequent sequential patterns in a corpus. The CMAP (Co-occurrence MAP) data structure is used in CM-SPAM to store item co-occurrence information. A generic pruning mechanism based on CMAP is combined with the vertical database representation to prune the search space and efficiently discover sequential patterns. More detail on TKS and CM-SPAM can be found in [11] and [10], respectively.
Sequential patterns that appear frequently in a corpus but have low confidence are of little use for decision making or prediction. Sequential rule mining discovers patterns by considering not only their support but also their confidence. A sequential rule \(X \rightarrow Y\) is a relationship between two PSSs \(X, Y \subseteq PS\), such that \(X \cap Y = \emptyset \) and \(X, Y \ne \emptyset \). The rule \(r: X \rightarrow Y\) means that if the items of X occur in a sequence, the items of Y will occur afterward in the same sequence. X is contained in \(S_{\alpha }\) (written as \(X \sqsubseteq S_{\alpha }\)) iff \(X \subseteq \bigcup _{i=1}^{n} \alpha _{i}\). A rule \(r: X \rightarrow Y\) is contained in \(S_\alpha \) (\(r \sqsubseteq S_{\alpha }\)) iff there exists an integer k such that \(1 \le k < n\), \(X \subseteq \bigcup _{i=1}^{k} \alpha _i\) and \(Y \subseteq \bigcup _{i=k+1}^{n} \alpha _i\). The confidence of r in PC is defined as: \(conf_{PC}(r) = \frac{|\{S \mid S \in PC \wedge r \sqsubseteq S\}|}{|\{S \mid S \in PC \wedge X \sqsubseteq S\}|}\).
The support of r in PC is defined as: \(sup_{PC}(r) = \frac{|\{S \mid S \in PC \wedge r \sqsubseteq S\}|}{|PC|}\).
A rule r is a frequent sequential rule iff \(sup_{PC} (r) \ge minsup\), and r is a valid sequential rule iff it is frequent and \(conf_{PC}(r) \ge minconf\), where the thresholds minsup, minconf \(\in [0, 1]\) are set by the user. Mining sequential rules in a corpus deals with finding all the valid sequential rules. For this, the ERMiner (Equivalence class based sequential Rule Miner) algorithm [12] is used. It relies on a vertical database representation and represents the search space of rules using equivalence classes of rules having the same antecedent or consequent. It employs two operations (left and right merges) to explore the search space of frequent sequential rules, and the search space is pruned with the Sparse Count Matrix (SCM) technique, which makes ERMiner more efficient than other sequential rule mining algorithms.
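As a small worked example (our own illustration): suppose the rule \(r: \{\)flatten\(\} \rightarrow \{\)assert\(\}\) is contained in 3 of the 5 proof sequences of a corpus, while \(\{\)flatten\(\}\) itself is contained in 4 of them. Then \(sup_{PC}(r) = 3/5 = 0.6\) and \(conf_{PC}(r) = 3/4 = 0.75\). With minsup = 0.5 and minconf = 0.7, r is both frequent and valid.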
The statistical Naive Bayes (NB) classifier [32] is based on Bayes' theorem and is used to compute the probability that a proof p in the corpus is useful for proving a new conjecture c. A conjecture is a proposition or statement that has not been proved yet, but is thought to be true. The probability is based on the fact that some p were already used in the proofs of conjectures similar to c. As each p contains a set of proof steps, the conditional probability P(PSS|c) estimates the relevance of PSS for c. These conditional probabilities are computed and multiplied to get the overall probability for c.
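The paper does not spell out the exact formula; a minimal sketch of the computation it describes, under the usual naive Bayes conditional-independence assumption (our reading, not a verbatim reproduction), is:

```latex
% Relevance of a stored proof p, with proof steps sets PSS_1, ..., PSS_k,
% for a new conjecture c: the individual conditional probabilities are
% multiplied to obtain the overall probability (a sketch, our reading).
P(p \mid c) \;\approx\; \prod_{i=1}^{k} P(PSS_i \mid c)
```

Here each factor \(P(PSS_i \mid c)\) would be estimated from how often \(PSS_i\) occurred in the proofs of conjectures similar to c.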
The Compact Prediction Tree+ (CPT+) model is used to predict the next proof step(s) [18]. CPT+ implements two compression strategies to reduce the CPT size and one strategy to reduce the prediction time. In the training phase, CPT+ takes a set of training sequences as input and generates three data structures: a prediction tree, a lookup table and an inverted index. These three structures are built incrementally by considering the training sequences one by one. For a proof sequence \(S_\alpha \) of n elements, the suffix of \(S_\alpha \) of size y, where \(1 \le y \le n\), is defined as \(P_{y}(S_{\alpha }) = \langle \alpha _{n-y+1}, \alpha _{n-y+2},..., \alpha _{n} \rangle \). Predicting the next proof steps of \(S_\alpha \) is done by finding those sequences that contain the items of \(P_{y}(S_{\alpha })\) in any order. For the prediction, CPT+ uses the consequent of each sequence that is similar to \(S_\alpha \). Let \(S_\beta \) be another proof sequence similar to \(S_\alpha \). The consequent of \(S_\beta \) with respect to \(S_\alpha \) is the longest subsequence \(\langle \beta _v, \beta _{v+1}, ..., \beta _{m} \rangle \) of \(S_\beta \) such that \(\bigcup _{k=1}^{v-1} \{\beta _{k}\} \subseteq P_{y}(S_{\alpha })\) and \(1 \le v \le m\). Each proof command found in the consequent of a similar proof sequence of \(S_\alpha \) is stored in a count table (CT) data structure. Finally, CPT+ returns the most supported proof step(s) in the CT as prediction(s).
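As a small illustration (our own): suppose \(P_{y}(S_{\alpha }) = \langle\)split, assert\(\rangle\) and the training corpus contains a similar sequence \(S_\beta = \langle\)assert, split, inst?, grind\(\rangle\). The prefix assert, split of \(S_\beta\) occurs entirely among the items of \(P_{y}(S_{\alpha })\), so the consequent of \(S_\beta\) with respect to \(S_\alpha\) is \(\langle\)inst?, grind\(\rangle\), and the commands inst? and grind are counted in the CT as prediction candidates.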
3 Experiments
All the following experiments were performed on an HP laptop with a fifth generation Core i5 processor and 8 GB RAM. For the case study, we select our previous work [29], where PVS is used for the analysis and verification of Reo connectors composed of untimed and timed channels. The main reason for selecting the proofs in [29] is that we are extending the formalization framework to cover the probabilistic [3] and stochastic [4] behavior of Reo connectors. The approach not only enabled us to comprehend the proof process for probabilistic connectors, but can also be effective in providing the necessary guidance to attain the proofs of such connectors.
The SPMF data mining library, developed in Java, is used to analyze the proof corpus. It is an open-source, cross-platform framework specialized in pattern mining tasks, offering implementations of more than 150 data mining algorithms. More detail on SPMF can be found in [13]. Typical invocations of the algorithms used in this work are sketched below.
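For orientation, the mining algorithms can be run on the integer-encoded corpus through SPMF's command-line interface; the following is a sketch (the file names are hypothetical, the parameter lists are abbreviated, and the exact arguments of each algorithm should be checked against the SPMF documentation):

```
java -jar spmf.jar run TKS proof_corpus.txt tks_patterns.txt 20
java -jar spmf.jar run CM-SPAM proof_corpus.txt cmspam_patterns.txt 50%
java -jar spmf.jar run ERMiner proof_corpus.txt rules.txt 50% 70%
```

Here 20 is the number k of top patterns for TKS, 50% is the minsup threshold for CM-SPAM and ERMiner, and 70% is the minconf threshold for ERMiner, matching the thresholds discussed in Sect. 3.2.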
3.1 Case Study
Reo [2] is a channel-based exogenous coordination language that allows the construction of complex connectors from primitive channels through compositional operators. Connectors in Reo provide the protocol for controlling and organizing the communication, synchronization and cooperation between components. Each channel in Reo has two channel ends, of type source or sink. The behavior of a connector is formalized in PVS by means of the data-flows at its sink and source nodes, which are essentially infinite sequences. In PVS, a record structure named TD is used to represent the timed-data sequences at sink and source nodes, where time is defined as a positive real number (\(\mathbb {R^+}\)) and data is defined as a positive type. Three main composition operators (flow through, replicate and merge) are used in Reo for connector construction. The flow through and replicate operators can be achieved explicitly in PVS, whereas the merge operator is defined inductively.
We omit the details of Reo connector modeling in PVS due to space limitations. Interested readers can find more details in [29, 31]. Here, one example is provided to show how connectors are modeled and how connector properties are proved in PVS.
Example 1
Figure 2 shows a connector consisting of one Synchronous (Sync) channel (AB) and one FIFO1 channel (BC), which accepts data items at the source node A, stores them in the buffer, and then dispenses them through the sink node C. The mixed node B allows data items to move from the Sync channel to the FIFO1 channel without any change.
Let a, b, c denote the time sequences at which the corresponding data sequences flow through nodes A, B and C, respectively. According to the semantics of the Sync and FIFO1 channels, \(a = b < c\). Let \(\alpha \), \(\gamma \) represent the data sequences observed at nodes A and C, respectively; then \(\alpha = \gamma \). In PVS, these results are proved with the following theorem.
Theorem 1
Sync(A,B) \(\wedge \) Fifo1(B,C) \(\Rightarrow \) Tle(A,C) \(\wedge \) Teq(A,B) \(\wedge \) Deq(A,C)
Proof
The PVS prover is based on the sequent calculus and builds a proof tree for a proof goal. The nodes of the proof tree are sequents. PVS proof commands may divide the main goal into sub-goals (tree leaves). The proof is completed when all sub-goals are proved. The proof steps for Theorem 1 are shown in Fig. 3.
3.2 Results and Discussion
Results obtained by applying SPM algorithms on the proof corpus are discussed in this section.
The TKS algorithm is first applied on the corpus to find hidden proof steps and patterns. TKS takes a corpus and a user-specified parameter k as input and returns the top-k most frequent patterns as output. The parameter k is used in place of the minsup threshold for the following two reasons:

1. The selection of a proper minimum support to discover the desired number of useful patterns has an effect on the performance of SPM algorithms.

2. The minimum support fine-tuning process is hard and time consuming.

To overcome these drawbacks, the parameter k puts a bound on the total number of patterns to be discovered by the algorithm. Some proof patterns of varying length discovered by the TKS algorithm are shown in Table 2. The column Sup indicates the occurrence count of each pattern in the corpus. Table 3 provides some useful information related to the frequent occurrence of proof steps and patterns used in the verification of Reo channels and connectors.
Unlike TKS, the CM-SPAM algorithm offers the minsup threshold. Table 3 lists some of the most useful frequent proof patterns in the corpus, extracted with the CM-SPAM algorithm. The first six proof patterns appear in at least \(50 \%\) of the sequences in the corpus. The next six patterns appear in at least \(40 \%\) of the sequences, and the last two patterns appear in at least \(10 \%\) of the sequences. The patterns discovered with the CM-SPAM algorithm are very similar to the results obtained with the TKS algorithm. As the outputs of TKS and CM-SPAM are very similar, the performance of TKS is compared with that of CM-SPAM in terms of execution time and memory usage. CM-SPAM is fine-tuned via the minsup threshold to generate the same k proof patterns. For the optimal support value, the execution time of TKS is very similar to that of CM-SPAM. Similarly, TKS showed excellent scalability. These results, which are consistent with [11], are important because finding the top-k sequential proof patterns is a harder problem than mining all proof patterns, as minsup must be raised dynamically, starting from 0.
Figure 4 shows the relationships between proof steps/patterns that are discovered through sequential rule mining with the ERMiner algorithm. The confidence (minconf) threshold is set to \(70\%\), which means that the discovered rules have a confidence of at least \(70\%\) (a rule \(X\rightarrow Y\) has a confidence of \(70\%\) if the set of proof commands in X is followed by the set of proof commands in Y at least \(70\%\) of the times that X appears in a proof sequence). The value above the arrow gives the support and the value below the arrow gives the confidence (probability). For example, the first rule in Fig. 4 indicates that \(94.7\%\) of the time, the assert command follows the expand command. With the ERMiner algorithm, we found some interesting relationships and dependencies of proof steps/patterns with each other. The results obtained so far indicate that the total number of proof steps in each proof (abstraction simplicity) correlates directly with the efficiency of SPM algorithms.
In [7], common proof patterns are found in Isabelle proofs with a variable-length Markov chain. Proofs are represented in a tree-structured format, which is linearized, i.e., the proofs are split into separate sequences that are weighted accordingly. However, linearization loses important connections (information) between different branches of the proofs, due to which interesting patterns may well be lost. In this work, the proof corpus contains all the necessary information for pattern discovery, and SPM algorithms are more user-friendly and work efficiently on the corpus.
The NB classifier implemented in SPMF is used to check the dependency of new conjectures on already proved facts. For that, the classifier is trained on the proofs present in the corpus. We then provide new conjectures from our ongoing work on probabilistic Reo connectors. In the output, the classifier successfully classified the new conjectures, which shows that the proofs can be used in guiding the proof process of new conjectures. Moreover, for conjectures taken from the PVS libraries, the classifier was unable to classify them, which means that their proofs do not depend on the facts present in the corpus. NB classifiers are also used in [23] for computing the proof dependencies of new conjectures for theorems taken from the Coq repository, with results reported in terms of measures such as precision, recall and rank. The NB implementation in SPMF only provides a binary classification output and does not report these measures. In the future, we would like to enhance the implementation of NB to provide statistics for these measures.
Predicting the next proof step(s) for a new conjecture or an unproved theorem/lemma has gained increased importance in the last few years. The CPT+ model is used for predicting the next proof steps. The model is first trained on the proof sequences in the corpus, and is then used to predict the next proof step for a new proof sequence. The prediction of the next proof step is based on the scores calculated by the model for each proof command. For example, CPT+ predicted assert for the proof sequence <flatten, split>. The statistics and scores assigned by the model to each proof step for this example are listed in Table 4. Note that a higher score is considered better for CPT+.
To check the efficiency of CPT+, we compared its performance with various other state-of-the-art prediction models: Dependency Graph (DG), Transition Directed Acyclic Graph (TDAG), CPT (the predecessor of CPT+), AKOM (All-K-Order-Markov) and LZ78. Each model is trained and tested with 10-fold cross-validation. Cross-validation characterizes the performance of each model by evaluating how well its statistical results generalize to an independent dataset. In k-fold cross-validation, the dataset is randomly partitioned into k sub-datasets; one sub-dataset is selected as the validation set for model testing and the remaining \(k-1\) sub-datasets are used for model training. This process is repeated k times, so that each sub-dataset is used exactly once as the validation set, and a single estimate of the result is obtained by averaging the k results. The main reason to use 10-fold cross-validation is to achieve low variance in each run. The results obtained for the various prediction models are shown in Table 5. A sketch of this evaluation loop is given below.
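For concreteness, the following self-contained Java sketch (ours; the paper used the model implementations bundled with SPMF) shows the shape of the k-fold cross-validation loop over encoded proof sequences, with a placeholder predictor interface standing in for CPT+, DG, TDAG, AKOM or LZ78:

```java
import java.util.*;

public class CrossValidation {

    // Placeholder for a sequence predictor such as CPT+, DG or AKOM.
    interface Predictor {
        void train(List<List<Integer>> sequences);
        // Returns the predicted next proof command, or null for "no match".
        Integer predictNext(List<Integer> prefix);
    }

    // k-fold cross-validation: returns the overall accuracy, i.e. the
    // number of successful predictions divided by the number of test sequences.
    static double crossValidate(List<List<Integer>> corpus, Predictor model, int k) {
        List<List<Integer>> data = new ArrayList<>(corpus);
        Collections.shuffle(data, new Random(42)); // fixed seed for repeatability
        int success = 0, total = 0;
        for (int fold = 0; fold < k; fold++) {
            List<List<Integer>> train = new ArrayList<>(), test = new ArrayList<>();
            for (int i = 0; i < data.size(); i++) {
                (i % k == fold ? test : train).add(data.get(i));
            }
            model.train(train);
            for (List<Integer> seq : test) {
                if (seq.size() < 2) continue;
                // Hide the last proof command and try to predict it.
                List<Integer> prefix = seq.subList(0, seq.size() - 1);
                Integer expected = seq.get(seq.size() - 1);
                Integer predicted = model.predictNext(prefix);
                if (expected.equals(predicted)) success++;
                total++;
            }
        }
        return total == 0 ? 0.0 : (double) success / total;
    }
}
```

Calling crossValidate(corpus, model, 10) for each model yields the accuracy figures of the kind reported in Table 5; a null prediction corresponds to the "no match" outcome defined below.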
For evaluation of prediction models, three measures are used. The result of a prediction can be:

- a success, if the model predicts accurately;
- a failure, if the model predicts inaccurately; and
- no match, if the model cannot perform the prediction.
The overall performance of each model is measured through its accuracy, which is the total number of successful predictions divided by the total number of test sequences. Two other measures, training time and testing time, are not included in the results here, as all the models take almost the same time for training and testing. CPT+ achieved higher accuracy (79.412%) than the other prediction models. CPT has a higher success rate than CPT+, but its higher no-match rate makes its accuracy lower than that of CPT+. Among the Markov-based prediction models, DG achieved the lowest success rate and the highest failure rate, while TDAG and AKOM have the same results for all four parameters.
4 Related Work
Using machine learning and data mining in theorem provers is not a new idea, and they are mainly used for three tasks: premise selection, strategy selection and internal guidance. Support vector machines (SVMs) and Gaussian processes (GPs) were used in [6] for the selection of good heuristics in the E theorem prover. In [27], kernel methods were applied to the strategy scheduling and strategy finding problems in three ATPs: E, Satallax and LEO-II. Deep networks have been used in [28] for internal guidance in E, where deep learning based proof guidance increased the total number of theorems proved while reducing the average number of proof search steps. Moreover, internal proof guidance methods based on the watchlist technique were developed for E in [17]. A proof-search guidance technique based on leanCoP was presented in [24] to guide tableaux proof search. In [33], GRU networks were used in MetaMath for guiding the proof search of a tableau-style proof process. Monte-Carlo tree search methods combined with a connection tableau were studied and implemented in leanCoP in [9] for guiding the proof search. A new theorem-proving algorithm (implemented in rlCoP) was recently presented in [26] for proof guidance, using Monte-Carlo simulations with reinforcement iterations. rlCoP showed better performance than leanCoP in solving unseen problems when trained on a large corpus.
For HOL based theorem provers, the variable length Markov models (VLMM) technique has been applied in [7] to a proof corpus of the Isabelle prover to identify sequences of proof steps, and these sequences were used to form tactics. Particle swarm optimization and NB based techniques were proposed in [8] to internally guide the given-clause algorithm in the Satallax prover. Premise selection techniques were developed in [23] for the Coq system, where machine learning methods are compared on Coq proofs taken from the CoRN repository. Recurrent and convolutional neural networks were used in [21] for premise selection in the Mizar prover. A corpus of proofs was constructed in [1] for training a kernelized classifier with bag-of-words features that represent term occurrences in a vocabulary. Premise selection based on machine learning and automated reasoning is provided for HOL4 in [15] by adapting HOL(y)Hammer [25]. A learning-based automation technique called TacticToe was recently developed in [16] on top of HOL4 for the automation of theorem proofs. The HolStep dataset, introduced in [22], consists of 10K conjectures and 2M statements for developing new machine learning based proof strategies.
5 Conclusion
The proof development process in ITPs requires heavy interaction between the user and the proof assistant, where users are forced to do a lot of repetitive work, which makes the proving process time consuming. To simplify the proof process and provide proof guidance, an SPM-based learning approach is adopted in this work to find frequent proof steps/patterns and their relationships in a PVS theory. An NB classifier is used to check the dependency of new conjectures on already proved facts. Moreover, the performance of several models for the prediction of the next proof step(s) is compared, where CPT+ performs better than the other models. Some interesting proof patterns are found with SPM, and the obtained results show that the number of proof steps in each proof correlates directly with the efficiency of SPM algorithms.
There are several directions for future work. First, we would like to use the SPM algorithms on the corpora of proof steps for theories included in the PVS library, which contains thousands of theorems. This will enable us to develop a more general learning approach for the proofs of new conjectures. Another direction is to use evolutionary and heuristic techniques, such as genetic programming and particle swarm optimization, for the development of PVS strategies from frequently occurring proof patterns. Other future work includes the implementation in SPMF of some well-known classifiers, such as k-nearest neighbors, and enhancing the implementation of NB to provide statistics for common measures such as precision, recall and F-measure. Last but not least, using SPM algorithms on the dataset provided by [22] is part of our future plans as well.
References
Alama, J., Heskes, T., Kühlwein, D., Tsivtsivadze, E., Urban, J.: Premise selection for mathematics by corpus analysis and kernel methods. J. Autom. Reasoning 52(2), 191–213 (2014)
Arbab, F.: Reo: a channel-based coordination model for component composition. Math. Struct. Comput. Sci. 14(3), 329–366 (2004)
Baier, C.: Probabilistic models for Reo connector circuits. J. Univ. Comput. Sci. 11(10), 1718–1748 (2005)
Baier, C., Wolf, V.: Stochastic reasoning about channel-based component connectors. In: Ciancarini, P., Wiklicky, H. (eds.) COORDINATION 2006. LNCS, vol. 4038, pp. 1–15. Springer, Heidelberg (2006). https://doi.org/10.1007/11767954_1
Blanchette, J.C., Haslbeck, M., Matichuk, D., Nipkow, T.: Mining the archive of formal proofs. In: Kerber, M., Carette, J., Kaliszyk, C., Rabe, F., Sorge, V. (eds.) CICM 2015. LNCS (LNAI), vol. 9150, pp. 3–17. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-20615-8_1
Bridge, J.P., Holden, S.B., Paulson, L.C.: Machine learning for first-order theorem proving - learning to select a good heuristic. J. Autom. Reasoning 53(2), 141–172 (2014)
Duncan, H.: The use of data-mining for the automatic formation of tactics. Ph.D. thesis, University of Edinburgh, UK (2007)
Färber, M., Brown, C.: Internal guidance for satallax. In: Olivetti, N., Tiwari, A. (eds.) IJCAR 2016. LNCS (LNAI), vol. 9706, pp. 349–361. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-40229-1_24
Färber, M., Kaliszyk, C., Urban, J.: Monte carlo tableau proof search. In: de Moura, L. (ed.) CADE 2017. LNCS (LNAI), vol. 10395, pp. 563–579. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63046-5_34
Fournier-Viger, P., Gomariz, A., Campos, M., Thomas, R.: Fast vertical mining of sequential patterns using co-occurrence information. In: Tseng, V.S., Ho, T.B., Zhou, Z.-H., Chen, A.L.P., Kao, H.-Y. (eds.) PAKDD 2014. LNCS (LNAI), vol. 8443, pp. 40–52. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-06608-0_4
Fournier-Viger, P., Gomariz, A., Gueniche, T., Mwamikazi, E., Thomas, R.: TKS: efficient mining of top-k sequential patterns. In: Motoda, H., Wu, Z., Cao, L., Zaiane, O., Yao, M., Wang, W. (eds.) ADMA 2013. LNCS (LNAI), vol. 8346, pp. 109–120. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-53914-5_10
Fournier-Viger, P., Gueniche, T., Zida, S., Tseng, V.S.: ERMiner: sequential rule mining using equivalence classes. In: Blockeel, H., van Leeuwen, M., Vinciotti, V. (eds.) IDA 2014. LNCS, vol. 8819, pp. 108–119. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-12571-8_10
Fournier-Viger, P., et al.: The SPMF open-source data mining library version 2. In: Berendt, B., et al. (eds.) ECML PKDD 2016. LNCS (LNAI), vol. 9853, pp. 36–40. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46131-1_8
Fournier-Viger, P., Lin, J.C.W., Kiran, R.U., Koh, Y.S., Thomas, R.: A survey of sequential pattern mining. Data Sci. Pattern Recogn. 1(1), 54–77 (2017)
Gauthier, T., Kaliszyk, C.: Premise selection and external provers for HOL4. In: Proceedings of CPP 2015, pp. 48–57. ACM (2015)
Gauthier, T., Kaliszyk, C., Urban, J.: TacticToe: learning to reason with HOL4 tactics. In: Proceedings of LPAR 2017. EPiC Series in Computing, vol. 46, pp. 125–143 (2017)
Goertzel, Z., Jakubův, J., Schulz, S., Urban, J.: ProofWatch: watchlist guidance for large theories in E. In: Avigad, J., Mahboubi, A. (eds.) ITP 2018. LNCS, vol. 10895, pp. 270–288. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-94821-8_16
Gueniche, T., Fournier-Viger, P., Raman, R., Tseng, V.S.: CPT+: decreasing the time/space complexity of the compact prediction tree. In: Cao, T., Lim, E.-P., Zhou, Z.-H., Ho, T.-B., Cheung, D., Motoda, H. (eds.) PAKDD 2015. LNCS (LNAI), vol. 9078, pp. 625–636. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-18032-8_49
Harrison, J., Urban, J., Wiedijk, F.: History of interactive theorem proving. In: Computational Logic. Handbook of the History of Logic, vol. 9, pp. 135–214. Elsevier (2014)
Hasan, O., Tahar, S.: Formal verification methods. In: Encyclopedia of Information Science and Technology, 3rd edn, pp. 7162–7170. IGI Global (2015)
Irving, G., Szegedy, C., Alemi, A.A., Eén, N., Chollet, F., Urban, J.: DeepMath - deep sequence models for premise selection. In: Proceedings of NIPS 2016, pp. 2243–2251. ACM (2016)
Kaliszyk, C., Chollet, F., Szegedy, C.: HolStep: a machine learning dataset for higher-order logic theorem proving. In: Proceedings of ICLR 2017, pp. 1–12 (2017)
Kaliszyk, C., Mamane, L., Urban, J.: Machine learning of Coq proof guidance: first experiments. In: Proceedings of SCSS 2014. EPiC Series in Computing, vol. 30, pp. 27–34 (2014)
Kaliszyk, C., Urban, J.: FEMaLeCoP: fairly efficient machine learning connection prover. In: Davis, M., Fehnker, A., McIver, A., Voronkov, A. (eds.) LPAR 2015. LNCS, vol. 9450, pp. 88–96. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-48899-7_7
Kaliszyk, C., Urban, J.: Hol(y)Hammer: Online ATP service for HOL light. Math. Comput. Sci. 9(1), 5–22 (2015)
Kaliszyk, C., Urban, J., Michalewski, H., Olsák, M.: Reinforcement learning of theorem proving. In: Proceedings of NeurIPS 2018, pp. 8836–8847 (2018)
Kühlwein, D., Urban, J.: MaLeS: a framework for automatic tuning of automated theorem provers. J. Autom. Reasoning 55(2), 91–116 (2015)
Loos, S.M., Irving, G., Szegedy, C., Kaliszyk, C.: Deep network guided proof search. In: Proceedings of LPAR 2017. EPiC Series in Computing, vol. 46, pp. 85–105 (2017)
Nawaz, M.S., Sun, M.: Reo2PVS: formal specification and verification of component connectors. In: Proceedings of SEKE 2018, pp. 391–396. KSI Research Inc. (2018)
Owre, S., Shankar, N., Rushby, J.M., Stringer-Calvert, D.W.J.: PVS System Guide, PVS Prover Guide, PVS Language Reference. Technical report, SRI International (2001)
PVS and SPM data. https://github.com/saqibdola/SPM-in-PVS
Russell, S.J., Norvig, P.: Artificial Intelligence - A Modern Approach, 3rd edn. Pearson Education, Upper Saddle River (2010)
Whalen, D.: Holophrasm: a neural automated theorem prover for higher-order logic. CoRR abs/1608.02644 (2016)
Acknowledgement
The work has been supported by the National Natural Science Foundation of China under grants no. 61772038, 61532019 and 61272160, and by the Guangdong Science and Technology Department (grant no. 2018B010107004).