During the course the measure-theoretic foundations of probability theory will be treated. Key words for the course are: random variables, distributions of random variables, different convergence concepts for random variables (convergence in probability, weak convergence, convergence in p-th mean) and the relations between them, characteristic functions, central limit theorems, conditional expectation and conditional distribution.
Percolation theory deals with the study of random subgraphs of Z^d. Consider the usual integer lattice Z^d with edges between nearest neighbours. Fix some probability p in (0,1). Each edge is now retained with probability p, and deleted with probability 1-p, independently of all other edges. What is left is a random subgraph G_p of the original graph, whose properties depend heavily on the retention parameter p.
The first interesting question concerns the existence of infinite components. When p is small, most edges will be deleted and one expects that all components of G_p are finite. When p is close to one, one might expect infinite components in G_p. The first interesting result says that there is indeed a critical point 0 < p_c = p_c(d) < 1, depending on the dimension, such that when p < p_c, G_p has no infinite components with probability one, but when p > p_c, G_p contains at least one infinite component with positive probability. What happens at the critical point, that is when p = p_c, is known in two and in high dimensions, but not, for instance, in three: in two and in high dimensions, no infinite components exist almost surely at the critical point.

When p > p_c, infinite components exist with positive probability, but how many? We shall prove, surprisingly, that for p > p_c there is with probability one exactly one infinite component. We shall also derive properties of the so-called percolation function, which is defined as the probability that the origin is contained in an infinite component, as a function of p.

When p < p_c, all components are finite, but how big are they? We shall prove that the probability that the component containing the origin has more than k points tends to zero exponentially fast in k.

What happens at p = p_c is not so well understood. The critical point p_c(d) is known only for d = 2: p_c(2) = 1/2. Despite the simple value, this is a very deep result, which took 20 years to prove after it was first conjectured. We shall be able to give a modern proof of this fact. In two dimensions, we shall also show that all components are finite a.s. at the critical point. The probability that the component containing the origin has more than k points must therefore tend to zero as k tends to infinity. We shall see that this does not happen exponentially fast (as it does in the case p < p_c).
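The model is easy to simulate on a finite box, which gives a feeling for the phase transition. Below is a minimal Python sketch (not course material; the function names are illustrative, and "the open cluster of the origin reaches the boundary of the box" is used as a crude finite-volume proxy for belonging to an infinite component):

```python
import random
from collections import deque

def origin_cluster(n, p, rng):
    """Retain each nearest-neighbour edge of the (2n+1)x(2n+1) box of Z^2
    independently with probability p, and return the open cluster of the
    origin, found by breadth-first search (edges are sampled lazily)."""
    open_edge = {}
    def is_open(a, b):
        e = (min(a, b), max(a, b))
        if e not in open_edge:
            open_edge[e] = rng.random() < p
        return open_edge[e]
    cluster = {(0, 0)}
    queue = deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (abs(nx) <= n and abs(ny) <= n and (nx, ny) not in cluster
                    and is_open((x, y), (nx, ny))):
                cluster.add((nx, ny))
                queue.append((nx, ny))
    return cluster

def theta_hat(p, n=20, trials=200, seed=0):
    """Crude finite-volume estimate of the percolation function: the
    fraction of configurations in which the origin's cluster reaches
    the boundary of the box."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        c = origin_cluster(n, p, rng)
        if any(max(abs(x), abs(y)) == n for x, y in c):
            hits += 1
    return hits / trials

print(theta_hat(0.3), theta_hat(0.5), theta_hat(0.7))
```

Around p_c(2) = 1/2 the estimate rises sharply from near 0 to near 1, and the transition becomes steeper as the box grows.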
This course is an introduction to the theory of statistical time series, with special attention to financial time series. A statistical time series is a sequence of random variables Xt, the index t being referred to as "time". Typically the variables are dependent, and one aim is to predict the "future" given observations of the "past". Among the time series models we discuss are the classical ARMA processes, and also the GARCH and stochastic volatility processes, which have become popular models for financial time series. We study the existence of stationary versions of these processes and, if time allows, also the unit-root problem. Within the context of nonparametric estimation we also discuss the ergodic theorem and extend the central limit theorem to dependent ("mixing") random variables. Thus the course is a mixture of probability and statistics, with some Hilbert space theory coming in to develop the spectral theory and the prediction problem. Many of the procedures that we discuss are implemented in the statistical computer package Splus and are easy to use. We recommend trying out these procedures, because they give additional insight that is hard to obtain from theory alone. A hand-out on Splus is provided. We assume that the audience is familiar with measure theory and basic concepts of statistics. Knowledge of measure-theoretic probability, stochastic convergence concepts, and Hilbert spaces is recommended. We presume no knowledge of time series analysis.
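The two model classes named above can be simulated in a few lines, which is a good way to get a first feeling for their behaviour. A minimal Python sketch (parameter names follow common textbook notation; the stationarity conditions in the comments are the classical ones):

```python
import random

def ar1(phi, n, sigma=1.0, seed=0):
    """Simulate an AR(1) process X_t = phi * X_{t-1} + e_t with
    N(0, sigma^2) innovations; for |phi| < 1 a stationary version exists."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        path.append(x)
    return path

def garch11(omega, alpha, beta, n, seed=0):
    """Simulate a GARCH(1,1) process: X_t = sigma_t * Z_t with
    sigma_t^2 = omega + alpha * X_{t-1}^2 + beta * sigma_{t-1}^2
    and Z_t standard normal; a stationary version with finite variance
    exists when alpha + beta < 1 (the chain starts in that variance)."""
    rng = random.Random(seed)
    s2, x, path = omega / (1.0 - alpha - beta), 0.0, []
    for _ in range(n):
        s2 = omega + alpha * x ** 2 + beta * s2
        x = (s2 ** 0.5) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

A GARCH path exhibits the volatility clustering that makes these processes attractive for financial returns: quiet stretches alternate with bursts of large moves.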
Certain algorithms for stochastic optimization are treated and applied to controlled queueing systems. Students implement several algorithms and experiment with them.
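As an illustration of the kind of algorithm involved (the course's actual selection may differ), here is a sketch of Robbins-Monro stochastic approximation, a classical scheme for optimizing an expectation that can only be observed with noise; the toy objective below is purely illustrative:

```python
import random

def robbins_monro(noisy_gradient, x0, n_steps, seed=0):
    """Robbins-Monro stochastic approximation:
    x_{k+1} = x_k - a_k * G(x_k), where G is an unbiased noisy estimate
    of the gradient and the step sizes a_k = 1/(k+1) satisfy
    sum a_k = infinity and sum a_k^2 < infinity."""
    rng = random.Random(seed)
    x = x0
    for k in range(n_steps):
        x -= (1.0 / (k + 1)) * noisy_gradient(x, rng)
    return x

# Toy problem: minimise E[(x - 3)^2] / 2 when the gradient x - 3 is only
# observed corrupted by standard normal noise.
x_star = robbins_monro(lambda x, rng: (x - 3.0) + rng.gauss(0.0, 1.0),
                       x0=0.0, n_steps=5000)
print(x_star)  # close to the minimiser 3
```

The same recursion, with the noisy gradient estimated from simulated waiting times, is the basic ingredient of gradient methods for controlled queues.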
At the University of Amsterdam, industrial statistics is co-ordinated by the Institute for Business and Industrial Statistics (IBIS UvA). In this course, research topics of the institute will be treated.
In the last decade, attention in quality control - one of the application areas of industrial statistics - has shifted from statistical process control (SPC) to the Six Sigma programme, a methodology for conducting improvement projects based on statistical investigation. As a result, the research done at IBIS UvA - which used to be focused on SPC and control charts - has evolved in the direction of topics which play a role in the Six Sigma methodology.

Topics of the course

Selected publications which will be used in the course:
[1] De Mast, J. (2002). Quality Improvement from the Viewpoint of Statistical Method. PhD thesis, University of Amsterdam.
We will study the book "Poisson Processes" by J.F.C. Kingman, Oxford Science Publications. The course will be in seminar form: after a number of introductory lectures by the teacher, the students prepare and present parts of the text. Most textbooks on probability theory mention the Poisson process, but most hurry past it to more general point processes or Markov processes. This neglect is ill judged, and stems perhaps from a lack of perception of the real importance of the Poisson process. This distortion comes in turn partly from a restriction to one dimension, while the theory becomes more natural and more powerful in a more general context. In this seminar we discuss the most important issues associated with the Poisson process, notably the superposition theorem, the mapping theorem, Campbell's theorem, the characteristic functional, Rényi's theorem, marked Poisson processes and the colouring theorem. We will also discuss applications.
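The colouring theorem in particular is easy to see at work in simulation. A minimal Python sketch (assuming the standard construction of a homogeneous Poisson process on a line segment from i.i.d. exponential inter-arrival times):

```python
import random

def poisson_process(rate, T, rng):
    """Points of a homogeneous Poisson process with the given rate on
    [0, T], built from i.i.d. exponential inter-arrival times."""
    t, points = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > T:
            return points
        points.append(t)

def colour(points, q, rng):
    """Colouring theorem: colour each point red with probability q,
    independently of everything else; the red and blue points then form
    independent Poisson processes with rates q*rate and (1-q)*rate."""
    red, blue = [], []
    for t in points:
        (red if rng.random() < q else blue).append(t)
    return red, blue

rng = random.Random(1)
pts = poisson_process(rate=5.0, T=100.0, rng=rng)
red, blue = colour(pts, q=0.3, rng=rng)
print(len(pts), len(red), len(blue))
```

The point counts come out near the means 500, 150 and 350, and empirical inter-arrival times within each colour are again approximately exponential.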
A stochastic process is a collection of random variables, indexed by a set T. The elements of T are thought of as time points. Typically, T is the set of natural numbers, or an interval of the real line. In this course we study some important classes of stochastic processes. Our main focus will be on so-called martingales and Markov processes, and on the relation between these processes. The most important example of a stochastic process that is both a martingale and Markovian is the Brownian motion process. Using the general results on martingales and Markov processes, we will study the properties of Brownian motion in some detail.
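Brownian motion is also easy to approximate numerically, which makes the martingale property tangible. A Python sketch (the empirical check below is illustrative, not a proof): a path is built from independent N(0, dt) increments, and since both B_t and B_t^2 - t are martingales, the averages of B_1 and B_1^2 - 1 over many paths should be near zero.

```python
import random

def brownian_path(T, n, seed=0):
    """Approximate a Brownian motion path on [0, T] on a grid of n steps
    by summing independent N(0, dt) increments; B_0 = 0 and
    B_t - B_s ~ N(0, t - s)."""
    rng = random.Random(seed)
    dt = T / n
    b, path = 0.0, [0.0]
    for _ in range(n):
        b += rng.gauss(0.0, dt ** 0.5)
        path.append(b)
    return path

# Empirical check of the two martingale identities at t = 1.
ends = [brownian_path(1.0, 100, seed=s)[-1] for s in range(2000)]
print(sum(ends) / len(ends))                       # E[B_1] = 0
print(sum(b * b - 1.0 for b in ends) / len(ends))  # E[B_1^2 - 1] = 0
```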
Stochastic calculus is an indispensable tool in modern financial mathematics. In this course we present this mathematical theory and apply it to the problem of pricing and hedging of financial derivatives. We treat the following topics from martingale theory and stochastic calculus: martingales in discrete and continuous time, construction and properties of the stochastic integral, Itô's formula, Girsanov's theorem, stochastic differential equations. As an application, we explain how stochastic differential equations are typically used to model financial markets and we discuss the problem of the pricing of derivatives such as stock options.
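As a preview of the application, the following Python sketch compares the Black-Scholes closed-form price of a European call with a Monte Carlo price obtained by simulating the risk-neutral SDE dS = r S dt + sigma S dB, whose exact solution is geometric Brownian motion (all numerical parameters below are illustrative):

```python
import math
import random

def bs_call(s0, K, r, sigma, T):
    """Black-Scholes closed-form price of a European call option."""
    d1 = (math.log(s0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return s0 * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(s0, K, r, sigma, T, n=100_000, seed=0):
    """Monte Carlo price: simulate S_T under the risk-neutral measure via
    the exact GBM solution S_T = S_0 exp((r - sigma^2/2) T + sigma B_T),
    then discount the average payoff."""
    rng = random.Random(seed)
    payoff = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        sT = s0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
        payoff += max(sT - K, 0.0)
    return math.exp(-r * T) * payoff / n

print(bs_call(100, 100, 0.05, 0.2, 1.0), mc_call(100, 100, 0.05, 0.2, 1.0))
```

The two prices agree up to Monte Carlo error, which shrinks like 1/sqrt(n).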
Nowadays, simulation methods based on random number generation on powerful computers play an important role in statistics. We highlight two methodologies: the bootstrap and Markov chain Monte Carlo simulation. The bootstrap method was introduced by Efron in 1977. It is a useful, generally applicable, but computationally intensive method to construct, for instance, confidence intervals. The basic idea of the method is resampling from the original data. The naive bootstrap, parametric bootstrap and smooth bootstrap will be discussed. By running a computer-simulated Markov chain for a suitably long time, we can generate observations from a distribution close to the stationary distribution of the Markov chain. By choosing suitable transition probabilities, practically any distribution can be simulated in this way. We will discuss the Metropolis algorithm, the basic algorithm for this kind of simulation, as well as some of its refinements, bearing in mind the relevance for statistics.
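Both methodologies fit in a few lines of Python. Below is a sketch of the naive bootstrap (percentile confidence interval) and of the random-walk Metropolis algorithm (the tuning constants, such as the proposal standard deviation and the number of resamples, are illustrative):

```python
import math
import random

def bootstrap_ci(data, stat, B=2000, alpha=0.05, seed=0):
    """Naive bootstrap: resample the data with replacement B times and take
    the empirical alpha/2 and 1-alpha/2 quantiles of the recomputed
    statistic as a percentile confidence interval."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)]) for _ in range(B))
    return reps[int(B * alpha / 2)], reps[int(B * (1 - alpha / 2)) - 1]

def metropolis(log_density, x0, steps, proposal_sd=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, sd^2) and accept with
    probability min(1, pi(x')/pi(x)); the target pi (known up to a
    constant, via its log density) is the chain's stationary law."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(steps):
        y = x + rng.gauss(0.0, proposal_sd)
        if math.log(rng.random()) < log_density(y) - log_density(x):
            x = y
        chain.append(x)
    return chain

# 95% percentile interval for the mean of a small data set, and a
# Metropolis chain targeting the standard normal distribution.
lo_ci, hi_ci = bootstrap_ci(list(range(1, 11)), lambda xs: sum(xs) / len(xs))
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
print((lo_ci, hi_ci), sum(chain) / len(chain))
```

In practice the early part of the Metropolis chain is discarded as burn-in and its autocorrelation is monitored; both refinements are among the topics discussed.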
Classical statistics considers models with a (finite-dimensional) Euclidean parameter. These parametric models may be extended to so-called semiparametric models by adding infinite-dimensional parameters. One of the most important examples is the extension of the linear regression model with normal errors to the semiparametric regression model in which the errors are just assumed to have mean zero, the shape of the error distribution being the added infinite-dimensional parameter. The goal is asymptotically efficient estimation of the Euclidean regression parameters. The theory that will be discussed has been developed in the last two decades. It will be illustrated via the above-mentioned regression model (applied in e.g. econometrics), the symmetric location model, the Cox proportional hazards model (applied in medical statistics), and other models.
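The semiparametric regression model can be made concrete by simulation. In the Python sketch below (all names illustrative) the errors have mean zero but are markedly non-normal, and the least-squares estimator of the Euclidean parameter remains consistent; whether it is asymptotically efficient when the error shape is unknown is precisely the kind of question the course addresses.

```python
import random

def simulate(theta, n, rng):
    """Semiparametric regression Y = theta * X + e: the errors e are only
    assumed to have mean zero (here a centred, skewed exponential), their
    shape being the infinite-dimensional nuisance parameter."""
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    ys = [theta * x + (rng.expovariate(1.0) - 1.0) for x in xs]
    return xs, ys

def ols(xs, ys):
    """Least-squares slope through the origin; consistent for theta under
    the mean-zero error assumption alone, normal errors or not."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

rng = random.Random(0)
xs, ys = simulate(2.0, 5000, rng)
est = ols(xs, ys)
print(est)  # close to the true parameter 2.0
```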