Invited Speakers & Talks
-
On the provability of consistency
Abstract: We revisit the foundational question concerning Peano arithmetic PA:
(1) can consistency of PA be established by means expressible in PA?
The usual answer to (1) is “No, by Gödel's Second Incompleteness Theorem.” In that theorem (G2), Gödel used an arithmetization of contentual mathematical reasoning and established that the arithmetical formula representing PA–consistency is not derivable in PA. Applying G2 to (1), one makes use of the formalization thesis (FT):
FT: any proof by means expressible in PA admits Gödel's arithmetization.
Historically, there has been no consensus on FT; Gödel (1931) and Hilbert (1934) argued against an even weaker version of FT with respect to finitary proofs, whereas von Neumann accepted it.
Note that the aforementioned negative answer to (1) is unwarranted: here is a counter-example to FT. Let \(Ind(F) \) denote the induction statement for an arithmetical formula \(F\). The claim \(C\), “for each formula \(F\), \(Ind(F) \)”, is directly provable by means of PA: given any \(F\), argue by induction to establish \(Ind(F) \). However, \(C\) is not supported by any arithmetization as a single formula since PA is not finitely axiomatizable.
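Here, assuming the standard formulation of arithmetical induction (the abstract leaves it implicit), \(Ind(F)\) may be written as
\[ Ind(F):\quad F(0) \wedge \forall x\,\bigl(F(x) \to F(x+1)\bigr) \;\to\; \forall x\, F(x). \]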
We provide a positive answer to (1). We offer a mathematical proof of PA–consistency,
no finite sequence of formulas is a PA–proof of \(0=1\),
by means expressible in PA, namely, by partial truth definitions. Naturally, this proof does not admit Gödel's arithmetization either.
-
Iterated truth definitions based on Tarskian biconditionals [1]
Abstract: We consider extensions of the language of Peano arithmetic by iterated truth definitions satisfying uniform Tarskian biconditionals. Given a language \(L\), we add a new symbol \(T\) for a truth predicate and postulate the equivalence of each sentence \(A\) (obtained by substituting numerical parameters into a formula) with \(T(\#{}A)\), where \(\#{}A\) is the Gödel number of \(A\). This extension procedure can be repeated transfinitely many times. Without further axioms, such extensions are known to be weak and conservative over the original system of arithmetic. Much stronger systems, however, are obtained by adding either induction axioms or, more generally, reflection axioms on top of them. Theories of this kind can be used to interpret some well-known “predicative” fragments of second-order arithmetic such as iterated arithmetical comprehension. Feferman and Schütte studied related systems of ramified analysis and determined the ordinal \(\Gamma_0\) as the bound to transfinite induction provable in “predicative” systems. We obtain sharp results on the proof-theoretic strength of these systems using methods of provability logic; in particular, we calculate their proof-theoretic ordinals and conservativity spectra. We consider a semilattice of theories enriched by suitable reflection operators and isolate the corresponding strictly positive modal logic (reflection calculus) axiomatizing the identities of this structure. The variable-free fragment of the logic provides a canonical ordinal notation system for the class of theories under investigation. This setup allows us to obtain, in a technically smooth way, suitable conservativity relationships for reflection principles of various strengths and, in particular, to obtain new proofs of several classical results in proof theory.
[1] Joint work with F. Pakhomov and E. Kolmakov.
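For orientation, the uniform Tarskian biconditionals postulated here can be written (in one standard notation, with \(\dot n\) the numeral for \(n\)) as
\[ \forall n\,\bigl( T(\#A(\dot n)) \leftrightarrow A(n) \bigr) \]
for each formula \(A(x)\) of the language \(L\).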
-
A Brouwer algebra related to the Weihrauch lattice [2]
Abstract: We prove that the Weihrauch lattice can be transformed into a Brouwer algebra by the consecutive application of two closure operators in the appropriate order: first completion and then parallelization. The closure operator of completion is a new closure operator that we introduce. It transforms any problem into a total problem on the completion of the respective types, where we allow any value outside of the original domain of the problem. This closure operator is of interest in itself, as it generates a total version of Weihrauch reducibility that is defined like the usual version of Weihrauch reducibility, but in terms of total realizers. From a logical perspective, completion can be seen as a way to make problems independent of their premises. Along with the completion operator and total Weihrauch reducibility, we need to study the precomplete representations that are required to describe these concepts. In order to show that the parallelized completed Weihrauch lattice forms a Brouwer algebra, we introduce a new multiplicative version of an implication. While the parallelized completed Weihrauch lattice forms a Brouwer algebra with this implication, the completed Weihrauch lattice fails to be a model of intuitionistic linear logic in two different ways. In order to pinpoint the algebraic reasons for this failure, we introduce the concept of a Weihrauch algebra that allows us to formulate the failure in precise and neat terms.
[2] Joint work with Guido Gherardi.
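For readers new to the area, the usual definition that the total variant modifies is (in standard notation, writing \(G \vdash g\) for “\(G\) is a realizer of \(g\)”):
\[ f \le_{\mathrm{W}} g \;:\Longleftrightarrow\; \exists\,\text{computable } K, H\ \ \forall G \vdash g:\quad p \mapsto H\bigl(p, G(K(p))\bigr) \vdash f; \]
total Weihrauch reducibility imposes the same condition, but only for total realizers \(G\).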
-
Tropical Jacobian conjecture [3]
Abstract: We prove that a tropical rational map is an isomorphism provided that, for every point, the convex hull of the Jacobian matrices at smooth points in a neighborhood of that point does not contain singular matrices. We also show that a tropical polynomial map on the plane is an isomorphism if all the Jacobians have the same sign (positive or negative). In addition, for a tropical rational map we prove that if the Jacobians have the same sign and the preimage of at least one regular point is a singleton, then the map is an isomorphism.
[3] Joint work with D. Radchenko.
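As a reminder of the setting (under the max-plus convention; min-plus is the mirror-image choice), the tropical semiring operations are
\[ x \oplus y = \max(x, y), \qquad x \odot y = x + y, \]
so tropical polynomial and rational maps are piecewise-linear, and the Jacobians in question are the ordinary Jacobians of the linear pieces at smooth points.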
-
Interval computations as applied constructive mathematics: from Shanin to Wiener and beyond
Abstract: Many algorithms of constructive mathematics are infeasible exhaustive-search algorithms. This fact places many results of constructive mathematics — originally invented to make mathematical results computable and thus more practically useful — too far from practical applications.
How can we make constructive mathematics more realistic? Yuri Matiyasevich, renowned for his solution of Hilbert's 10th problem, observed that while algorithms of constructive mathematics assume that we have inputs known with increasing accuracy, in practice, the accuracy is fixed. At any given moment of time, we only have a single measurement result \(\tilde x\), corresponding to the currently available accuracy \(\Delta\). As a result, the only information that we have about the (unknown) actual value \(x\) of the measured quantity is that it belongs to the interval \({\bf x}=[\tilde x-\Delta,\tilde x+\Delta]\). Given a data processing algorithm \(y=f(x_1,\ldots,x_n)\) and intervals \({\bf x}_1,\ldots,{\bf x}_n\) corresponding to the inputs, we must therefore describe the corresponding range of possible values of \(y\). This problem is called the problem of interval computations, or interval analysis.
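As a minimal sketch of what interval computations do (illustrative code, not from the talk; the endpoint formulas are the standard ones):

    # Minimal interval arithmetic: each operation returns an interval
    # guaranteed to contain all possible results.
    from dataclasses import dataclass

    @dataclass
    class Interval:
        lo: float
        hi: float

        def __add__(self, other):
            # [a,b] + [c,d] = [a+c, b+d]
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __sub__(self, other):
            # [a,b] - [c,d] = [a-d, b-c]
            return Interval(self.lo - other.hi, self.hi - other.lo)

        def __mul__(self, other):
            # [a,b] * [c,d]: min/max over all endpoint products
            ps = [self.lo * other.lo, self.lo * other.hi,
                  self.hi * other.lo, self.hi * other.hi]
            return Interval(min(ps), max(ps))

    # Enclosure of y = x1 * (x1 + x2) for x1 in [1.9, 2.1], x2 in [-0.1, 0.1]:
    x1 = Interval(1.9, 2.1)
    x2 = Interval(-0.1, 0.1)
    print(x1 * (x1 + x2))   # an interval containing every possible value of y

Such an enclosure may in general be wider than the exact range (the well-known dependency problem), which is one source of the field's algorithmic depth.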
In this problem, techniques borrowed from constructive mathematics work so well that many researchers — including Yu. V. Matiyasevich himself — consider interval analysis Applied Constructive Mathematics. Interval analysis has numerous practical applications ranging from robotics to planning spaceship trajectories to chemical engineering.
The main idea of interval computations can be traced to Norbert Wiener. Its algorithms were developed by Ramon Moore in the late 1950s and early 1960s. (And, by the way, Yuri Matiyasevich boosted this area by organizing conferences and by helping to launch a journal — then called Interval Computations — which remains, under the new, somewhat more general title Reliable Computing, the main journal of the interval computations community.)
In this talk, we will expand on this relation, and provide a brief overview of the main achievements of interval computations – and of their relation to constructive mathematics.
-
Applications of Orevkov's theorem on Glivenko classes to logic programming
Abstract: Vladimir Orevkov found several classes of formulas with the Glivenko property: if a formula in such a class is provable classically, then it is provable intuitionistically. His results are used in the design of software for verifying the equivalence of programs in the input language of the logic programming system CLINGO (an ongoing project, joint with Patrick Lühne and Torsten Schaub). When two logic programs correspond to intuitionistically equivalent sets of formulas, we can conclude that they have the same semantics, but classical equivalence is, generally, not sufficient for drawing this conclusion. This fact makes it difficult to verify the equivalence of logic programs using automatic reasoning tools for classical logic. Orevkov's theorem, on the other hand, helps us identify classes of logic programs for which this difficulty does not arise.
-
On the computability with infinite time Blum–Shub–Smale machines [4]
Abstract: We study the computability over the reals given by Blum–Shub–Smale machines working in ordinal time: a limit step \(\alpha\) may be executed provided that all the values in the registers converge to finite limits as time tends to \(\alpha\); after it, one more step can be executed, and so on. The basic facts about this computability, as well as its specific properties, are presented. We also discuss the adequacy of this approach to analysis.
[4] Joint work with Peter Koepke.
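In symbols, the limit rule described above says that at a limit ordinal \(\lambda\) the machine may continue with register contents
\[ r_i(\lambda) \;=\; \lim_{\alpha \to \lambda} r_i(\alpha) \qquad (\text{for each register } i), \]
provided all these limits exist and are finite.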
-
On reflection principles and iterated consistencies
Abstract: We consider reflection principles comprising proof and/or provability predicates in Peano Arithmetic. Every such principle is shown to be provably equivalent to either local reflection or an iterated consistency assertion; therefore, the reflection principles constitute a non-collapsing hierarchy with respect to their deductive strength. The joint logic GLA (for Gödel–Löb–Artëmov) of formal provability and explicit proofs has been the principal tool in the study of these arithmetical principles.
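In one standard notation (with \(\Pr_{\mathrm{PA}}\) the provability predicate of PA), local reflection and iterated consistency read
\[ \mathrm{Rfn}(\mathrm{PA}):\ \Pr_{\mathrm{PA}}(\ulcorner \varphi \urcorner) \to \varphi \ \text{ for each sentence } \varphi; \qquad \mathrm{Con}^{1} = \mathrm{Con}(\mathrm{PA}), \quad \mathrm{Con}^{n+1} = \mathrm{Con}\bigl(\mathrm{PA} + \mathrm{Con}^{n}\bigr). \]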
-
Set–based proof–verification and its complexity [5]
Abstract: The inferential armory of an automated proof assistant based on Set Theory must include algorithms which, inside specific fragments of the underlying symbolic language, can establish whether or not a given formula is satisfiable in the von Neumann universe. One such decision algorithm, concerning the fragment named Multi–Level Syllogistic (in short, MLS), was devised in 1979 and gave rise to many others; today, an enriched variant of it plays a key role in the proof–checker AEtnaNova. Since the satisfiability problem for MLS is NP–complete, and in view of the pervasiveness of its decision mechanism in actual uses of AEtnaNova, an investigation was undertaken aimed at identifying useful sublanguages of MLS whose satisfiability tests have polynomial–time worst–case complexity. This contribution, grounded in ongoing research, reports on a comprehensive taxonomy of polynomial and NP–complete fragments of set theory.
[5] Joint work with Domenico Cantone and Pietro Maugeri.
-
Definite mathematical problems, indefinite extensibility and semi–intuitionistic theories of sets
Abstract: Brouwer argued that limitation to constructive reasoning is required when dealing with “unfinished” totalities. Dummett argued that the correct logic is intuitionistic, not classical. Most famous are his meaning-theoretic arguments, but there is also a separate argument based on the notion of indefinite extensibility. In recent times, Feferman argued that certain mathematical problems, notably CH, are indeterminate, employing semi-intuitionistic set theories as a formal framework for presenting his case. In the talk I'll try to say something about a common core that connects these arguments, and also about how, from a metamathematical point of view, different levels of indefiniteness/definiteness can be treated in the single framework of semi-intuitionistic theories of sets, whose basic logic is intuitionistic but for which the law of excluded middle is accepted for bounded formulas.
-
Approximation of decision problems
Abstract: Property testing distinguishes a structure which satisfies a property from a structure which is \(\varepsilon\)–far from satisfying the property, using a constant number of queries (samples). In some cases, a sublinear number of queries may also be considered. We first show that VPL (Visibly Pushdown Languages) can be recognized by a streaming property tester with \(O(\mathrm{poly}(\log n)/\varepsilon)\) queries (joint work with F. Magniez, O. Serre and N. François). We then consider graphs which follow a power-law degree distribution, given as a stream of edges, and study the existence of clusters. We give probabilistic algorithms which, with \(O(\sqrt{n})\) queries, detect clusters of size \(O(\sqrt{n})\) with high probability (recent work with Claire Mathieu) and which may also approximate the clusters.
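To fix intuitions about the query model (a toy illustration of property testing, not the VPL tester of the talk), the sketch below distinguishes the all-zeros string from strings that are \(\varepsilon\)-far from it using \(O(1/\varepsilon)\) queries:

    # Toy property tester: accept the all-zeros string, reject strings
    # that differ from it in at least eps*n positions (eps-far inputs).
    import math, random

    def test_all_zeros(query, n, eps):
        # An eps-far input has >= eps*n ones, so each random sample hits
        # a one with probability >= eps; ceil(2/eps) samples all miss
        # with probability <= (1-eps)^(2/eps) <= e^{-2} < 1/3.
        for _ in range(math.ceil(2 / eps)):
            if query(random.randrange(n)) != 0:
                return False   # found a witness: certainly not all zeros
        return True            # all-zeros inputs are always accepted

    s = [0] * 900 + [1] * 100  # 10%-far from the all-zeros string
    print(test_all_zeros(lambda i: s[i], len(s), 0.1))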
-
Sam Sanders
The hubris of reducing the uncountable to the finite/countable [6]
Abstract: We study the logical and computational properties of basic theorems of uncountable mathematics, including the Cousin and Lindelöf lemmas, published in 1895 and 1903. Historically, these lemmas were among the first formulations of open–cover compactness and the Lindelöf property, respectively. These notions are of great conceptual importance: the former is commonly viewed as a way of treating uncountable sets like e.g. \([0,1]\) as “almost finite”, while the latter allows one to treat uncountable sets like e.g. the real numbers \(\mathbb{R}\) as “almost countable”. This reduction of the uncountable to the finite/countable turns out to have a considerable logical and computational cost: we show that the aforementioned lemmas, and many related theorems, are extremely hard to prove, while the associated sub–covers are extremely hard to compute. Indeed, in terms of the standard scale (based on comprehension axioms), a proof of these lemmas requires at least the full extent of second-order arithmetic, a system originating from Hilbert and Bernays' Grundlagen der Mathematik. This observation has far–reaching implications for the Grundlagen's spiritual successor, the program of Reverse Mathematics, and the associated Gödel hierarchy. We discuss similar theorems relating to measure theory, topology, and the gauge integral, a generalization of the Lebesgue and improper Riemann integrals that also uniquely provides a direct formalization of Feynman's path integral.
[6] Joint work with Dag Normann.
-
Non-well-founded derivations in the Gödel–Löb provability logic
Abstract: The Gödel–Löb provability logic GL is a modal logic describing all universally valid principles of formal provability in Peano arithmetic. A proof-theoretic presentation of this logic in the form of a sequent calculus allowing circular and non-well-founded proofs was given several years ago. In this talk, I would like to discuss Hilbert-style non-well-founded derivations in GL and consider algebraic and neighbourhood semantics of this logic with the obtained derivability relation.
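For reference, GL is axiomatized over the basic modal logic K by Löb's axiom
\[ \Box(\Box A \to A) \to \Box A, \]
with \(\Box\) read as formal provability in Peano arithmetic.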
-
Universal statements and Kolmogorov complexity
Abstract: The idea to classify statements of some logical form according to their “complexity” was suggested by Cristian and Elena Calude. It can be clarified as follows. Consider a universal \(\Pi_1\) arithmetical statement \(A(n)\) with one integer variable \(n\). Then, for every numeral \(N\), we get a closed universal statement \(A(N)\). The complexity \(C_A(T)\) of a universal statement \(T\) with respect to \(A\) can be defined as the logarithm of the minimal \(N\) such that \(A(N)\) is provably equivalent to \(T\). The usual Solomonoff–Kolmogorov argument proves that there exists a universal statement \(A(n)\) that makes the function \(C_A\) minimal up to an \(O(1)\) additive term. An interesting special case of universal statements are statements of the form \(C(x)>y\), where \(C\) is the usual Kolmogorov complexity function and \(x\) and \(y\) are numerals. Some of them are quite strong: there exists an incompressible \(x\) such that the claim of its incompressibility implies (in PA) all true universal statements of the same or smaller complexity (up to an \(O(1)\) term). We will discuss the properties of these statements: for example, not every universal statement is provably equivalent to one of them, though a similar question for conditional complexity seems to be open. In this way we see some interesting interplay between recursion theory and (elementary) proof theory.
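Spelled out (reading provable equivalence as, say, equivalence provable in PA), the definition above is
\[ C_A(T) \;=\; \log \min \{\, N \;:\; \mathrm{PA} \vdash A(N) \leftrightarrow T \,\}. \]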
-
The finitary content of sunny nonexpansive retractions [7]
Abstract: The goal of proof mining is to extract quantitative information from proofs in mainstream mathematics which are not necessarily fully constructive. Often, such proofs make use of strong mathematical principles, but a preliminary analysis may show that they are not actually needed, so the proof may be carried out in a system of strength corresponding roughly to first-order arithmetic. Following a number of significant advances in this vein by Kohlenbach in 2011 and by Kohlenbach and Leuştean in 2012, we now tackle a long-standing open question: the quantitative analysis of the strong convergence of resolvents in classes of Banach spaces more general than Hilbert spaces.
This result was proven for the class of uniformly smooth Banach spaces by Reich in 1980. We analyze a proof given in 1990 by Morales, showing that adding the hypothesis that the space is uniformly convex, thus still covering the case of \(L^p\) spaces, can serve to eliminate the strongest principles used in the proof by way of a modulus of convexity for the squared norm of such spaces. A further procedure of arithmetization brings the proof down to System \(T\), so that the proper analysis may proceed. After obtaining a non-effective realizer of the metastability statement, this realizer is majorized in order to obtain the desired rate. Subsequent considerations calibrate this bound to \(T_1\). In particular, this result completes some analyses that had previously been obtained only partially, yielding rates of metastability within the above-considered class of Banach spaces for the Halpern and Bruck iterations.
[7] Joint work with Ulrich Kohlenbach.
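For orientation, the metastability statement analyzed here has the usual Kohlenbach/Tao form: for a sequence \((x_n)\),
\[ \forall \varepsilon > 0\ \forall g\colon \mathbb{N}\to\mathbb{N}\ \exists n\ \forall i, j \in [n,\, n+g(n)]\quad \|x_i - x_j\| \le \varepsilon, \]
and a rate of metastability is a bound on \(n\) in terms of \(\varepsilon\) and \(g\).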
-
The Verifier–Falsifier games with restrictions on computational complexity of strategies
Abstract: Traditionally, game semantics was developed to confirm or complement some existing semantics of logical systems. Usually it was established that the Verifier has a winning strategy in the game associated with the formula \(A\) iff \(A\) is derivable in some corresponding deductive system or true in a model-theoretical sense. It is well known that restrictions on the classes of strategies used by the Verifier and the Falsifier may break this relationship, so the question of “admissible” restrictions has been studied: for example, whether it is possible to consider only computable strategies and still obtain the same “adequate” semantics. We study the restrictions from a different point of view: how much the semantics may be deformed by some natural asymmetry between the players. We consider various kinds of semantic games (for example, games with backward moves). It turns out that under some conditions the Verifier may win even if the formula is not true in the ordinary logical semantics. Conditions such as the Verifier being able to compute a universal function for all possible strategies of the Falsifier are considered. Our main concern is not foundational but rather philosophical: nowadays, more and more often, one has to deal with different forms of “scientific” confirmation based on tests rather than proofs (which is closer to game semantics) and with biased (or even dishonest) players that may have considerable computational resources.
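As a minimal sketch of the classical, unrestricted setup that the talk then perturbs (illustrative code; the restricted games studied in the talk are not implemented here), consider a Verifier–Falsifier evaluation game for propositional formulas, in which optimal play matches ordinary truth:

    # Formulas: ('var', name) | ('not', f) | ('and', f, g) | ('or', f, g)
    def wins(formula, val, verifier=True):
        # Returns True iff the original Verifier has a winning strategy
        # under the assignment `val`; `verifier` records whether the roles
        # are currently unswapped (True) or swapped by negations (False).
        tag = formula[0]
        if tag == 'var':
            # At an atom, the original Verifier wins iff the atom's truth
            # value matches the current role orientation.
            return val[formula[1]] == verifier
        if tag == 'not':
            return wins(formula[1], val, not verifier)  # negation swaps roles
        _, f, g = formula
        if (tag == 'or') == verifier:
            # The original Verifier moves: one good choice suffices.
            return wins(f, val, verifier) or wins(g, val, verifier)
        # The original Falsifier moves: Verifier must win either way.
        return wins(f, val, verifier) and wins(g, val, verifier)

    # p or (not p): Verifier wins under every assignment (classically valid).
    phi = ('or', ('var', 'p'), ('not', ('var', 'p')))
    print(wins(phi, {'p': False}))  # True

Restricting either player to a limited class of strategies, as in the talk, breaks the match between winning and truth.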
-
Arbitrary natural numbers in the framework of Carnapian quantified modal logic [8]
Abstract: Inspired by Fine's work on arbitrary objects, we discuss how “arbitrary” — or “generic” — natural numbers can be dealt with. It was proposed by Kripke that reasoning about such objects can be represented using Carnapian modal logic with quantifiers over individual concepts (i.e., functions from possible worlds to elements of a given domain). Following this proposal, we develop a formal framework for dealing with arbitrary natural numbers; a distinctive feature of our presentation is that we supplement the original modal language with a special intensional predicate of being “specific”. Both the model–theoretic and complexity aspects of the resulting generic structure are investigated.
[8] Joint work with Leon Horsten.
-
The fixed point calculation for modal logics with the Lewis arrow [9]
Abstract: In classical modal logic, the Lewis arrow reduces to the necessity operator. This well-known fact fails over a constructive base. This last insight opens the way for the study of multiple Lewisian versions of Löb's Logic. In this talk, we zoom in on the matter of explicit fixed points. The construction of explicit fixed points consists of two stages. In the first stage, one constructs fixed points for formulas whose main connective is modal. In the second stage, one extends the class of formulas with explicit fixed points to formulas in which the fixed point variable is modalized. In fact, one can go further, as we will briefly indicate. We study two methods of constructing explicit fixed points for formulas whose principal connective is the Lewis arrow: the de Jongh–Visser method and the de Jongh–Sambin method. We explain that these calculations are independent of one another. We discuss the matter of the minimal logic that supports each calculation (reverse mathematics of fixed point calculations). In the de Jongh–Visser case we have an elegant characterization of that logic, but in the de Jongh–Sambin case (surprisingly, since the de Jongh–Sambin calculation is simpler) the question of an elegant characterization is open. We show that the second stage is fairly general and can be realized over the basic logic iA. Finally, we will discuss the case of the Strong Löb Principle, under which all constructions become much simpler.
[9] Joint work with Tadeusz Litak.
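For orientation, the classical (non-Lewisian) instance of the de Jongh–Sambin theorem: in GL, the fixed point equation \(p \leftrightarrow \neg\Box p\) has the explicit solution \(p = \neg\Box\bot\), since \(\mathrm{GL} \vdash \neg\Box\bot \leftrightarrow \neg\Box\neg\Box\bot\); the talk studies analogues in which \(\Box\) is replaced by the Lewis arrow.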
-
Anabelian geometry in a model theory setting [10]
Abstract: I will present a model-theoretic formalism for treating analytic and pro-étale covers of algebraic varieties. This allows a reformulation of Grothendieck's anabelian geometry. It also allows one to consider questions of categoricity of the respective theories (or rather abstract elementary classes). A series of results obtained in 2002–2015 for semi-abelian varieties demonstrated that categoricity is equivalent to the classification of the action of Galois groups on the torsion subgroups together with relevant Kummer theory. In anabelian cases, categoricity leads directly to conjectures about the action of the Galois group on the pro-finite fundamental group, first raised in Grothendieck's “Esquisse d'un programme”.
[10] The talk was canceled by the speaker on May 3, 2019.