Université d'Auvergne Clermont1 | CNRS


Séminaires

See also the ISIT news, in particular the announced journal club sessions. To receive our seminar announcements, please use the form on the subscription page.

25 June 2014
Fakhri Torkhani

Nowadays, 3-D triangular meshes are increasingly used in modern multimedia devices and services. In many applications, mesh surfaces may undergo lossy operations that can impair their visual quality. To ensure an acceptable delivered quality of service, the perceptual quality of 3-D static and dynamic meshes should be properly evaluated and controlled. In this presentation, we will propose new objective metrics able to faithfully evaluate the perceptual quality of 3-D meshes. The performance of our objective metrics is evaluated using experimental studies carried out to collect subjective human opinion scores on distorted meshes. Finally, we will present a few applications to show the potential practical use of the proposed metrics.
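A purely geometric baseline helps make the goal concrete: the simplest mesh-quality measures compare vertex positions directly, which is exactly what perceptual metrics aim to improve upon. A minimal sketch of such a baseline (not the speaker's metrics; toy data):

```python
import numpy as np

def vertex_rmse(verts_ref, verts_dist):
    """Root-mean-square vertex displacement between a reference mesh and
    a distorted copy sharing its connectivity: a purely geometric
    distance, oblivious to perceptual effects."""
    diff = np.asarray(verts_ref, float) - np.asarray(verts_dist, float)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

# Toy example: a unit quad whose vertices are all shifted by 0.1 along z.
ref = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
dist = ref + np.array([0.0, 0.0, 0.1])
print(round(vertex_rmse(ref, dist), 6))  # 0.1
```

Such vertex-wise measures correlate poorly with human judgement of visual quality, which is what motivates the perceptually driven metrics of the talk.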

28 January 2014
Richard Hartley

I will talk about recent results from a number of people in my group on Riemannian manifolds in computer vision. In many vision problems, Riemannian manifolds arise as a natural model: data related to a problem can often be represented naturally as a point on a Riemannian manifold. This talk will give an intuitive introduction to Riemannian manifolds and show how they can be applied in many situations.

Manifolds of interest include the manifold of positive definite matrices and the Grassmann manifolds, which have a role in object recognition and classification, and the Kendall shape manifold, which represents the shape of 2D objects.

Of particular interest is the question of when one can define positive-definite kernels on Riemannian manifolds. This would allow the application of kernel techniques such as SVMs, kernel FDA and dictionary learning directly on the manifold.

12 November 2013
Thomas Dietenbeck

An algorithm to segment and track the myocardium using the level-set formalism is described. To this end, two priors are proposed: a shape prior based on a geometric model (hyperquadrics) and a motion prior expressed as a level conservation constraint on the implicit function associated with the level set. These prior terms are coupled with a local data-attachment term and a thickness term that prevents the two contours from merging. The algorithm is validated on 20 echocardiographic sequences with manual references from 2 experts.

10 October 2013
Damien Gonzalez

Digital segmentation algorithms such as active contour models often use signal parameters as energies. Estimating derivatives is almost mandatory for most of them, since they rely on regularization terms, as in the snake algorithm. The presentation will describe the fast level-wise convolution (LWC) and its complexity of O(2n·log2(m)). Finally, I will show two LWC-compatible kernel families.
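For reference, derivative estimation by convolution can be sketched with a plain central-difference kernel. This is only the generic textbook scheme the talk builds on; the LWC and its kernel families are the speaker's own:

```python
import numpy as np

def derivative_by_convolution(signal, h=1.0):
    """Estimate the first derivative of a uniformly sampled signal by
    convolving with a central-difference kernel (the generic scheme,
    not the speaker's LWC)."""
    kernel = np.array([1.0, 0.0, -1.0]) / (2.0 * h)
    # mode="same" keeps the output aligned with the input; the two
    # boundary samples are unreliable and would be handled separately.
    return np.convolve(signal, kernel, mode="same")

x = np.linspace(0.0, 1.0, 101)
y = x ** 2                        # exact derivative: 2x
dy = derivative_by_convolution(y, h=x[1] - x[0])
print(round(float(dy[50]), 6))    # 1.0, the derivative at x = 0.5
```

A direct convolution like this costs O(n·m) for a kernel of length m; schemes such as the LWC aim to do better.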

10 September 2013
Vincent Nivoliers

This presentation will describe the optimisation of Restricted Voronoï Diagrams. Such a structure partitions a mesh among a set of sample points, associating each point of the mesh with its nearest sample. Many applications in sampling optimisation and mesh generation can be derived from this tool. I will first describe how to optimize objective functions defined on Restricted Voronoï Diagrams, and in particular how to compute their gradient. I will then show how to use these results to optimise a sampling of a general function defined on a mesh. Finally, I will show how a distance between two meshes can be viewed as an objective function on a Restricted Voronoï Diagram and optimized to obtain a surface-fitting algorithm.

More material on the results I will be presenting can be found on my personal web page: http://alice.loria.fr/~nivoliev, in my PhD thesis (in French) and in two of my publications (Sampling Functions on a Mesh [...] and Fitting Polynomial Surfaces [...]).
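The nearest-sample assignment underlying a Restricted Voronoï Diagram can be sketched at the vertex level. A true RVD clips Voronoï cells against the mesh triangles; the discrete approximation below, and the classical Lloyd relaxation step, are only illustrative stand-ins for the gradient-based optimisation of the talk:

```python
import numpy as np

def nearest_sample_partition(points, samples):
    """Assign each point to its nearest sample: the vertex-level
    analogue of the restricted Voronoï partition."""
    # Pairwise squared distances, shape (n_points, n_samples).
    d2 = ((points[:, None, :] - samples[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def lloyd_step(points, samples):
    """One Lloyd relaxation step: move each sample to the centroid of
    the points assigned to it (samples with no points stay put)."""
    labels = nearest_sample_partition(points, samples)
    new = samples.astype(float).copy()
    for i in range(len(samples)):
        mask = labels == i
        if mask.any():
            new[i] = points[mask].mean(axis=0)
    return new

# Four points on a segment and two samples at its ends.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.9, 0.0], [1.0, 0.0]])
smp = np.array([[0.0, 0.0], [1.0, 0.0]])
print(nearest_sample_partition(pts, smp))  # [0 0 1 1]
```

Iterating `lloyd_step` spreads the samples evenly over the points, a classical alternative to optimizing the RVD objective by gradient descent.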

5 July 2013
Arnaud Fournel

 

For the last fifteen years or so, functional Magnetic Resonance Imaging (fMRI) has made it possible to extract information about brain function, and in particular about the localization of cognitive processes. The information contained in fMRI acquisitions is extracted using the general linear model and statistical inference. Although this so-called "classical" method has made it possible to validate most lesion models non-invasively, it suffers from certain limitations. To address this problem, various analysis techniques have emerged that propose new ways of interpreting neuroimaging data.

We present two new multivariate methods based on Kohonen maps. Our methods analyze fMRI data with as few a priori assumptions as possible. In parallel, we attempt to extract information about the neural networks involved in emotions.

The first of these methods addresses functional specialization and the second functional connectivity. We present the resulting findings, and each method is then compared with the classical analysis in terms of the information it extracts. In addition, we focus on the notion of emotional valence and attempt to establish the existence of a possible network shared between positive and negative valence. The consistency of this network is evaluated both across perceptual modalities and across stimulus categories.

Each of the proposed methods corroborates the information obtained with the classical method while providing new information about the processes under study.

Regarding emotions, our work highlights a cerebral network shared between negative and positive valence, as well as the consistency of this information in certain brain regions across perceptual modalities and stimulus categories.

(EMC/Université Lyon 2)

25 June 2013
Pablo Mesejo

This talk will be focused on the development of algorithms for the automatic segmentation of anatomical structures in biomedical images, particularly the hippocampus in histological images from the mouse brain. Such algorithms will be based on computer vision techniques and artificial intelligence methods. More precisely, on the one hand, we take advantage of statistical shape models to segment the anatomical structure under consideration and to embed the segmentation into an optimization framework. On the other hand, metaheuristics and classifiers are used to perform the optimization of the target function defined by the shape model (as well as to automatically tune the system parameters), and to refine the results obtained by the segmentation process, respectively. Different methods, with their corresponding advantages and disadvantages, will be introduced during the presentation.

28 May 2013
Anuj Srivastava

I will present a comprehensive framework for analyzing shapes of 2D and 3D objects by focusing on their boundaries as curves and surfaces. An important distinction is to treat boundaries not as point sets or level sets, as is commonly done, but as parameterized objects. However, parameterization adds extra variability to the representation, as different re-parameterizations of an object do not change its shape. This variability is handled by defining quotient spaces of object representations, modulo re-parameterization and rotation groups, and inheriting a Riemannian metric on the quotient space from the larger space. For curves in Euclidean spaces, we use an elastic Riemannian metric that can be viewed as an extension of the classical Fisher-Rao metric, used in information geometry, to higher dimensions. Furthermore, we define a specific square-root representation that reduces this complicated metric to the standard L2 metric, thus greatly simplifying computations such as geodesic paths, sample means, tangent PCA, and stochastic modeling of observed shapes. For surfaces, we have proposed a similar square-root representation and an elastic Riemannian metric that allows parameterization-invariant shape analysis of 3D objects.

I will demonstrate these ideas using applications from computer vision, biometrics and activity recognition, protein structure analysis, and anatomical shape analysis.
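The square-root representation for curves has a compact discrete form: the square-root velocity function q(t) = c'(t)/sqrt(|c'(t)|), under which the elastic metric becomes the plain L2 metric. A minimal sketch (omitting the rotation and re-parameterization alignment that the full framework also quotients out):

```python
import numpy as np

def srvf(curve, dt):
    """Square-root velocity function q(t) = c'(t) / sqrt(|c'(t)|).
    Under this representation the elastic shape metric reduces to the
    plain L2 metric, so distances become norms of differences."""
    v = np.gradient(curve, axis=0) / dt                 # discrete velocity
    speed = np.maximum(np.linalg.norm(v, axis=1), 1e-12)
    return v / np.sqrt(speed)[:, None]

def l2_distance(q1, q2, dt):
    """Discrete L2 distance between two SRVFs."""
    return float(np.sqrt(np.sum((q1 - q2) ** 2) * dt))

# Two planar curves sampled at 200 points: a straight segment and a
# gentle sine arch with the same endpoints.
t = np.linspace(0.0, 1.0, 200)
dt = t[1] - t[0]
line = np.stack([t, np.zeros_like(t)], axis=1)
arch = np.stack([t, 0.2 * np.sin(np.pi * t)], axis=1)
print(l2_distance(srvf(line, dt), srvf(arch, dt), dt) > 0.0)  # True
```

In the L2 setting, geodesics between SRVFs are straight lines and sample means are ordinary averages, which is what makes the statistical computations mentioned above tractable.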

Anuj Srivastava's Website: http://stat.fsu.edu/~anuj/

20 March 2013
Manuel Grand-Brochier

The concept of image analysis covers a range of topics, from photogrammetry to object segmentation, by way of landmark extraction and contextual analysis. In this talk we will focus on the characterization of interest points and on the segmentation of objects against complex backgrounds.

Since the 1990s, local analysis of the information present in an image has grown considerably in many applications, such as localization assistance, tracking, and gesture or object recognition. To meet the growing needs of such applications, we will first focus on the development of two interest-point description methods, one spatial and one spatio-temporal, based on a local anisotropic analysis of the signal. A study of their influence on localization assistance, sub-sequence registration and object segmentation will follow, in order to highlight the improvements they bring.

The second topic addressed in this talk relates to an ANR project whose goal can be summarized as the identification and classification of tree leaves. After a brief description of the various modules making up this project, we will look more specifically at the segmentation part and its influence on leaf description and classification. The design of a validation benchmark, with ground truths, will then be detailed. The talk will conclude by highlighting the relevance of the choices made in this project in terms of initialization (strokes, distance maps, polygonal estimation) and segmentation tools.

19 March 2013
Ikhlef Bechar

Model-based approaches have been established as a robust way to tackle computer vision and pattern recognition problems. Their practical performance depends strongly on how efficiently real-world objects are represented (abstracted) on a computer. Indeed, useful object representations need to be general, discriminative, transformation invariant, numerically stable and easily implementable on a standard computer.

Throughout my talk, I will discuss the use of the higher-order active shape (HOAS) methodology for the representation and recognition of objects in vision problems. HOASes are a particular class of Markovian interaction models whose basic claim is that objects, along with their transformations, can be represented intrinsically and modeled as stable objects (i.e. minima) using weighted interactions between all possible n-tuples of their points (n >= 2). In particular, I will focus on the geometry of objects from an object detection and recognition perspective.

Therefore, after briefly presenting the general HOAS framework and providing its main stability (optimality) result, I will describe in detail three of its classes:

  • The traditional SOAC (2nd order) class, with a proof of its limitations.
  • The FOAC (4th order) class, as a general Euclidean-invariant framework for modeling homogeneous and heterogeneous shapes.
  • The extended SOAC (2nd order) class, as a general transformation-invariant framework for modeling heterogeneous shapes.

For each HOAS class, after presenting its theory and interpretation, I will describe the efficient implementation of both its learning (modeling) and operational (on-line) steps. I will also discuss the strengths and weaknesses of each class. The talk will be supported by practical results on 2D IR and stereo images.

If time allows, I will devote a third part of my talk to the convex relaxation of HOAS models and how it compares with other numerical resolution approaches (such as level sets and graph cuts).

18 January 2013
Saleh Mosaddegh (Greyc, ENSICAEN)

 

The analysis of fingerprints plays a major role for the police and the justice system, e.g. to establish proof of a crime. Latent fingerprints can be invisible, but they are revealed with a monochromatic powder applied with a brush. They are then lifted using adhesive tape, to be analyzed and identified once back in the lab. Several drawbacks are inherent to this manual method, including the involuntary deterioration of the fingerprint; the heavy, time-consuming and costly procedure of gathering and analyzing a large number of fingerprints; and the fact that lifting a fingerprint removes it from its support, which can deprive the police of potential supplementary elements of proof. In this talk, we present an automatic photographic acquisition system for capturing images of fingerprints, from the physical acquisition device to the software that automatically yields a 3D reconstruction of the fingerprint, i.e. the fingerprint and the surface on which it lies. The proposed technological solution is innovative in that it relies on a single captured color image of the scene, onto which structured light is projected, and it uses the same image to recover the texture of the scene by removing the pattern. The acquisition system is therefore portable and as easy to use as a standard camera.

10 January 2013
Fethi Bereksi

 

Emotion recognition is one of the great challenges in human-human and human-computer interaction.
In this presentation, an approach to emotion recognition based on physiological signals is proposed. Six basic emotions (joy, sadness, fear, disgust, neutrality and amusement) are analysed using physiological signals.
These emotions are induced by presenting pictures from the IAPS (International Affective Picture System) to the subjects.
The physiological signals of interest in this analysis are the electromyogram (EMG), respiratory volume (RV), skin temperature (SKT), skin conductance (SKC), blood volume pulse (BVP) and heart rate (HR). These signals are used to extract characteristic temporal and frequency-domain parameters, which are then classified with the SVM (support vector machine) technique.
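The classification stage can be sketched end to end. The features below are synthetic stand-ins for the temporal and frequency-domain parameters extracted from the physiological signals, and the hinge-loss trainer is a minimal substitute for a full SVM library:

```python
import numpy as np

def train_linear_svm(X, y, lr=0.1, lam=0.01, epochs=300):
    """Minimal linear SVM trained by subgradient descent on the
    regularized hinge loss; labels y must be +1 / -1."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1.0                              # margin violators
        grad_w = lam * w - (y[mask][:, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, X):
    return np.sign(X @ w + b)

# Hypothetical 2-D feature clusters standing in for parameters derived
# from EMG, RV, SKT, SKC, BVP and HR; two well-separated emotions.
rng = np.random.default_rng(0)
joy  = rng.normal([0.8, 0.2], 0.1, size=(40, 2))   # "joy" features
fear = rng.normal([0.2, 0.9], 0.1, size=(40, 2))   # "fear" features
X = np.vstack([joy, fear])
y = np.array([1] * 40 + [-1] * 40)

w, b = train_linear_svm(X, y)
print(predict(w, b, np.array([[0.8, 0.2]])))  # lands on the "joy" side
```

A real study would use a mature SVM implementation with kernel support and cross-validated hyperparameters; this sketch only shows the max-margin principle on toy features.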
 
 
 
Prof. Fethi BEREKSI REGUIG
Director of the Biomedical Engineering Research Laboratory and head of the Master's programme in Biomedical Instrumentation
Department of Electrical and Electronic Engineering
Faculty of Technology
Université de Tlemcen, Algeria

Research interests: biomedical instrumentation and the processing of physiological and electrophysiological signals.

5 December 2012
Nicole Artner

This talk is about the two main topics of my research. Firstly, a method to extract a part-based model of an observed scene from a video sequence. It is based on the idea that things that move together throughout the whole video belong together and define a “rigid” object or part. A set of successfully tracked feature points is used for the necessary observations. By employing a graph pyramid, the feature points can be grouped depending on their motion over time. The result is a hierarchical description (graph pyramid) of the scene, where each vertex in the top level of the pyramid represents a “rigid” part of the foreground or the background, and encloses the salient features used to describe it. Secondly, an approach to track arbitrary objects in challenging scenes with simple trackers (e.g. Mean Shift). This is realized by describing and tracking the target object with a spring system represented by an attributed graph. A spring system encodes the spatial relationships of the features describing the target object and enforces them by spring-like behavior during tracking. Tracking is done in an iterative process by combining the hypotheses of simple trackers with the hypotheses extracted from the spring system.
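The spring-system idea can be sketched in a few lines: pairwise spring forces nudge tracked feature positions back toward learned rest lengths. This toy relaxation (hypothetical parameters, a single spring) only illustrates the principle, not the talk's tracker, which also fuses the hypotheses of the simple per-feature trackers:

```python
import numpy as np

def spring_relax(positions, edges, rest, k=0.5, iters=50):
    """Iteratively nudge feature positions so that pairwise distances
    approach the rest lengths of the springs connecting them."""
    p = positions.astype(float).copy()
    for _ in range(iters):
        for (i, j), r in zip(edges, rest):
            d = p[j] - p[i]
            length = np.linalg.norm(d)
            if length < 1e-9:
                continue                          # coincident features: skip
            corr = k * (length - r) * d / (2.0 * length)
            p[i] += corr                          # pull the pair together
            p[j] -= corr                          # (or push apart) symmetrically
    return p

# Two features drifted to 14 px apart; their spring has rest length 10.
pos = np.array([[0.0, 0.0], [14.0, 0.0]])
out = spring_relax(pos, edges=[(0, 1)], rest=[10.0])
print(round(float(np.linalg.norm(out[1] - out[0])), 6))  # 10.0
```

With several features and springs, the same relaxation enforces the spatial layout of the target object while each simple tracker proposes new positions.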

 

Speaker website: http://www.neolin.net/?page_id=474

PRIP lab (TU Wien, Austria): http://www.prip.tuwien.ac.at/

20 September 2012
Francesc Moreno-Noguer

In this talk, I will first present an approach to the PnP problem (the estimation of the pose of a calibrated camera from n point correspondences between an image and a 3D model) whose computational complexity grows linearly with n. Our central idea is to express the 3D points as a weighted sum of four virtual control points. The problem then reduces to estimating the coordinates of these control points in the camera reference frame, which can be done in O(n) using simple linearization techniques. I will then show that the same type of approach can be applied to register non-rigid 3D surfaces. However, since monocular non-rigid reconstruction is severely under-constrained, we have to consider additional constraints, based either on local rigidity (to reconstruct deformable and inextensible surfaces) or on shading coherence (to reconstruct deformable and stretchable surfaces).
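The control-point idea behind the O(n) formulation can be sketched directly: each 3D point is written as an affine combination of four control points, and because the weights sum to one they are preserved by any rigid transform. A minimal sketch with toy control points (in practice they would be derived from the centroid and principal directions of the point cloud):

```python
import numpy as np

def control_point_weights(points, ctrl):
    """Express each 3D point as an affine combination of four control
    points. The weights sum to one, so the same weights describe the
    points in both the world and the camera frames."""
    # Solve [ctrl^T; 1 1 1 1] @ alpha = [p; 1] for all points at once.
    C = np.vstack([ctrl.T, np.ones((1, 4))])              # 4 x 4
    P = np.vstack([points.T, np.ones((1, len(points)))])  # 4 x n
    return np.linalg.solve(C, P).T                        # n x 4 weights

# Toy, non-degenerate control tetrahedron and two sample 3D points.
ctrl = np.array([[0.0, 0.0, 0.0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
pts = np.array([[0.2, 0.3, 0.1], [0.5, 0.5, 0.5]])
alpha = control_point_weights(pts, ctrl)
print(np.allclose(alpha @ ctrl, pts))  # True
```

Once the weights are fixed, estimating the pose reduces to recovering the four control points in the camera frame, which is what keeps the overall cost linear in n.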

In the final part of the talk, I will discuss the major limitations of these
linear formulations and propose a novel and alternative stochastic
exploration strategy. I will present results both for non-rigid shape and
human pose recovery.