Schedule

 

Monday, February 19

Morning (9:00 - 13:00 including 30-minute coffee break)

Opening ceremony

Dimensionality reduction for quantitative data – Part I (Maurizio Vichi)

 

Afternoon (14:00 - 18:00 including 30-minute coffee break)

Dimensionality reduction for quantitative data – Part II (Maurizio Vichi)

 

The two lectures provide an introduction to supervised and unsupervised learning. The tentative syllabus is:

- partitioning and hierarchical clustering methods with a modelling approach;

- dimensionality reduction methods (Principal Component Analysis, Exploratory Factor Analysis, Confirmatory Factor Analysis and Structural Equation Modeling - SEM);

- sequential and simultaneous clustering and dimensionality reduction (the sequential, or tandem, approach is sketched after this list).
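
As a brief illustration of the tandem approach mentioned in the last item, the following is a minimal sketch assuming scikit-learn: principal components are extracted first and a k-means partition is then computed on the component scores. The simultaneous clustering-and-reduction methods discussed in the lecture require dedicated implementations and are not shown.

    # Tandem approach: dimensionality reduction first, clustering second.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    X = load_iris().data                      # quantitative data matrix (n x p)
    Z = StandardScaler().fit_transform(X)     # standardize the variables

    pca = PCA(n_components=2)                 # step 1: dimensionality reduction
    scores = pca.fit_transform(Z)             # component scores (n x 2)

    km = KMeans(n_clusters=3, n_init=10, random_state=0)  # step 2: partitioning
    labels = km.fit_predict(scores)

    print("explained variance ratio:", pca.explained_variance_ratio_)
    print("cluster sizes:", np.bincount(labels))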

 

 

Tuesday, February 20

Morning (9:00 - 13:00 including 30-minute coffee break)

Dimensionality reduction for categorical data – Part I (Michael Greenacre)

 

The lecture provides an overview of Correspondence Analysis and related methods. The tentative syllabus is:

- correspondence analysis (CA) as the categorical equivalent of Principal Component Analysis (a minimal computational sketch follows this list);

- dimension reduction and clustering of cross-tabulations, frequency data, proportions and ratio-scale data in general;

- the logarithmic, Box-Cox and logratio transformations.
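
As a minimal computational sketch of CA, the following numpy code computes principal inertias and principal coordinates from the singular value decomposition of the standardized residuals of a small contingency table. The table is invented for illustration only.

    import numpy as np

    N = np.array([[16, 48, 67],               # hypothetical cross-tabulation
                  [73, 23, 52],
                  [30, 35, 41]], dtype=float)

    P = N / N.sum()                           # correspondence matrix
    r = P.sum(axis=1)                         # row masses
    c = P.sum(axis=0)                         # column masses

    # standardized residuals S = D_r^(-1/2) (P - r c') D_c^(-1/2)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)

    inertia = sv ** 2                         # principal inertias
    row_coords = U * sv / np.sqrt(r)[:, None] # principal coordinates of rows
    col_coords = Vt.T * sv / np.sqrt(c)[:, None]

    print("total inertia:", inertia.sum())
    print("row principal coordinates:\n", row_coords[:, :2])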

 

Afternoon (14:00 - 18:00 including 30-minute coffee break)

Dimensionality reduction for categorical data – Part II (Michael Greenacre)

 

The lecture focuses on Multiple Correspondence Analysis and related methods. The tentative syllabus is:

- multiple correspondence analysis (MCA) of multivariate categorical data (a minimal sketch follows this list);

- dimensionality and total variance of multivariate categorical data;

- subset MCA;

- clustering of categories in a multivariate categorical dataset;

- categorical Principal Component Analysis (catPCA).
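
A minimal sketch of MCA as correspondence analysis of the indicator (complete disjunctive) matrix, assuming pandas and numpy; the three categorical variables below are invented. Subset MCA and catPCA are not shown.

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({                       # hypothetical categorical data
        "sex":    ["M", "F", "F", "M", "F", "M"],
        "smoker": ["yes", "no", "no", "yes", "yes", "no"],
        "region": ["N", "S", "N", "C", "S", "C"],
    })
    Q = df.shape[1]                           # number of categorical variables

    Z = pd.get_dummies(df).to_numpy(float)    # indicator matrix (n x total categories)
    P = Z / Z.sum()
    r = P.sum(axis=1)                         # row masses
    c = P.sum(axis=0)                         # category masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)

    # total inertia of the indicator matrix equals (J - Q) / Q,
    # with J the total number of categories
    print("total inertia:", (sv ** 2).sum(), "=", (Z.shape[1] - Q) / Q)
    print("row principal coordinates:\n", (U * sv / np.sqrt(r)[:, None])[:, :2])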

 

 

Wednesday, February 21

Morning (9:00 - 13:00 including 30-minute coffee break)

Fuzzy unsupervised classification (Paolo Giordani)

 

The lecture focuses on fuzzy techniques for unsupervised classification. The tentative syllabus is:

- fuzzy logic;

- fuzzy extensions of the k-means algorithm (sketched after this list);

- fuzzy extensions of the k-medoids (also known as Partitioning Around Medoids, PAM) algorithm;

- fuzzy clustering of mixed data.
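
A minimal numpy sketch of the basic fuzzy c-means updates (memberships and weighted centroids, fuzzifier m = 2) on simulated data; it illustrates only the fuzzy extension of k-means, not the PAM or mixed-data variants covered in the lecture.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
    n, k, m = X.shape[0], 2, 2.0              # units, clusters, fuzzifier

    U = rng.dirichlet(np.ones(k), size=n)     # random initial memberships (n x k)
    for _ in range(100):
        W = U ** m                                        # fuzzified memberships
        centers = W.T @ X / W.sum(axis=0)[:, None]        # weighted centroids (k x 2)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))           # u_ik proportional to d_ik^(-2/(m-1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < 1e-6:    # stop when memberships stabilize
            U = U_new
            break
        U = U_new

    print("cluster centers:\n", centers.round(2))
    print("memberships of the first five units:\n", U[:5].round(3))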

 

Afternoon (14:00 - 18:00 including 30-minute coffee break)

Model-based unsupervised classification (Roberto Rocci)

 

The lecture introduces and discusses the finite mixture model as a model-based approach to the unsupervised classification problem (a brief illustrative sketch follows the list below). The tentative syllabus is:

- definition and main properties;

- maximum likelihood estimation and the EM algorithm;

- finite mixture of Gaussians;

- outliers;

- choice of the number of components.
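
A minimal sketch, assuming scikit-learn, of fitting finite mixtures of Gaussians by maximum likelihood (EM) on simulated data and choosing the number of components with BIC.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1.0, (200, 2)),          # simulated two-component data
                   rng.normal(3, 0.5, (100, 2))])

    fits, bics = {}, {}
    for k in range(1, 6):                                  # candidate numbers of components
        fits[k] = GaussianMixture(n_components=k, covariance_type="full",
                                  n_init=5, random_state=0).fit(X)
        bics[k] = fits[k].bic(X)                           # Bayesian information criterion

    best_k = min(bics, key=bics.get)                       # smaller BIC is better
    print("BIC by number of components:", {k: round(v) for k, v in bics.items()})
    print("selected k:", best_k)
    print("mixing proportions:", fits[best_k].weights_.round(2))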

 

 

Thursday, February 22

Morning (9:00 - 13:00 including 30-minute coffee break)

Model-based classification (Roberto Rocci)

 

The lecture extends the finite mixture model to settings where the analysis is not completely unsupervised (an illustrative sketch follows the list below). The tentative syllabus is:

- finite mixture of regression models;

- finite mixture of experts;

- semi-supervised classification;

- supervised classification.
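
A minimal numpy sketch of the EM algorithm for a two-component finite mixture of linear regressions on simulated data (single covariate, component-specific variances); the mixture-of-experts and (semi-)supervised extensions discussed in the lecture are not shown.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 300
    x = rng.uniform(0, 10, n)
    z = rng.random(n) < 0.5                              # latent component labels
    y = np.where(z, 1.0 + 2.0 * x, 8.0 - 1.0 * x) + rng.normal(0, 1.0, n)
    X = np.column_stack([np.ones(n), x])                 # design matrix

    G = 2
    beta = rng.normal(size=(G, 2))                       # initial regression coefficients
    sigma2 = np.ones(G)                                  # component variances
    pi = np.full(G, 1.0 / G)                             # mixing proportions

    for _ in range(200):
        # E-step: responsibilities proportional to pi_g * N(y | X beta_g, sigma2_g)
        mu = X @ beta.T                                  # (n, G) component means
        dens = np.exp(-0.5 * (y[:, None] - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component
        for g in range(G):
            w = resp[:, g]
            Xw = X * w[:, None]
            beta[g] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
            sigma2[g] = w @ (y - X @ beta[g]) ** 2 / w.sum()
        pi = resp.mean(axis=0)

    print("mixing proportions:", pi.round(2))
    print("coefficients (intercept, slope) per component:\n", beta.round(2))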

 

Afternoon (14:00 - 18:00 including 30-minute coffee break)

Supervised classification (Agostino di Ciaccio)

 

The lecture introduces nonlinear supervised classification techniques widely used in machine learning (a brief code sketch follows the list below). The tentative syllabus is:

- introduction and application fields of machine learning;

- evaluation methods in classification;

- ensemble methods based on classification trees;

- neural networks;

- supervised classification with neural networks.
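
A minimal sketch, assuming scikit-learn, that compares a tree ensemble and a small neural network with cross-validated accuracy on a built-in dataset; it illustrates only the evaluation workflow, not the full range of methods in the lecture.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)

    models = {
        "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
        "neural network": make_pipeline(StandardScaler(),
                                        MLPClassifier(hidden_layer_sizes=(32,),
                                                      max_iter=2000, random_state=0)),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")  # 5-fold CV
        print(f"{name}: mean accuracy {acc.mean():.3f} (+/- {acc.std():.3f})")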

 

 

Friday, February 23

Morning (9:00 - 13:00 including 30-minute coffee break)

Ongoing research (Lecturers present their current research interests)

 

Afternoon (14:00 - 18:00 including 30-minute coffee break)

Ongoing research (Participants, upon request, may present their current research interests)

Closing ceremony