Speakers

Confirmed speakers

External speakers and contributors

 

Ismail Ben Ayed, École de Technologie Supérieure, Canada

 

Ismail Ben Ayed is currently a Full Professor at ETS Montreal. He is also affiliated with the CRCHUM. His interests are in computer vision, optimization, machine learning and medical image analysis algorithms. Ismail has authored over 100 fully peer-reviewed papers, mostly published in the top venues of those areas, along with 2 books and 7 US patents. In recent years, he has given over 30 invited talks, including 4 tutorials at flagship conferences (MICCAI’14, ISBI’16, MICCAI’19 and MICCAI’20). His research has been covered in several visible media outlets, such as Radio Canada (CBC), Quebec Science Magazine and Canal du Savoir. His research team has received several recent distinctions, such as the MIDL’19 best paper runner-up award and several top-ranking positions in internationally visible contests. Ismail served on the Program Committee of MICCAI’15, MICCAI’17 and MICCAI’19, and as Program Chair for MIDL’20 and IEEE IPTA’17. He also serves regularly as a reviewer for the main scientific journals of his field, and was selected several times among the top reviewers of prestigious conferences (such as CVPR’15 and NeurIPS’20).

Abstract - (common lecture with Jose Dolz) Weakly supervised deep learning

Weakly- and semi-supervised learning methods, which do not require full annotations and scale up to large problems and data sets, are currently attracting substantial research interest in both the CVPR and MICCAI communities. The general purpose of these methods is to mitigate the lack of annotations by leveraging unlabeled data with priors, either knowledge-driven (e.g., anatomy priors) or data-driven (e.g., domain adversarial priors). For instance, semi-supervision uses both labeled and unlabeled samples, weak supervision uses uncertain (noisy) labels, and domain adaptation attempts to generalize the representations learned by CNNs across different domains (e.g., different modalities or imaging protocols). In semantic segmentation, a large body of very recent works focused on training deep CNNs with very limited and/or weak annotations, for instance, scribbles, image level tags, bounding boxes, points, or annotations limited to a single domain of the task (e.g., a single imaging protocol). Several of these works showed that adding specific priors in the form of unsupervised loss terms can achieve outstanding performances, close to full-supervision results, but using only fractions of the ground-truth labels.

This presentation overviews very recent developments in weakly supervised CNN segmentation. More specifically, we will discuss several recent state-of-the-art models, and connect them from the perspective of imposing priors on the representations learned by deep networks. First, we will detail the loss functions driving these models, including, among others,  knowledge-driven functions (e.g., anatomy, shapes, or conditional random field losses), as well as commonly used knowledge and data-driven priors. Then, we will discuss several possible optimization strategies for each of these losses, and emphasize the importance of optimization choice.
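
As a purely illustrative companion to this lecture (not taken from the speakers' material), here is a minimal PyTorch sketch of one such unsupervised loss term: a partial cross-entropy computed only on scribble-annotated pixels, combined with a soft size prior on the predicted foreground. The two-class setting, tensor shapes and size bounds are all hypothetical.

import torch
import torch.nn.functional as F

def weak_segmentation_loss(logits, scribbles, size_bounds=(0.05, 0.30)):
    """Partial cross-entropy on scribble pixels plus a soft size prior.

    logits:      (B, 2, H, W) raw scores for background/foreground
    scribbles:   (B, H, W) long tensor, -1 where unlabelled, else the class index
    size_bounds: assumed lower/upper bounds on the foreground fraction (prior knowledge)
    """
    # Supervised term: cross-entropy restricted to the annotated (scribble) pixels
    ce = F.cross_entropy(logits, scribbles.clamp(min=0), reduction="none")
    labelled = (scribbles >= 0).float()
    partial_ce = (ce * labelled).sum() / labelled.sum().clamp(min=1)

    # Unsupervised prior: penalize predicted foreground sizes outside the assumed bounds
    foreground = torch.softmax(logits, dim=1)[:, 1]
    size = foreground.mean(dim=(1, 2))
    lo, hi = size_bounds
    size_prior = (F.relu(lo - size) ** 2 + F.relu(size - hi) ** 2).mean()

    return partial_ce + size_prior

# Toy usage with random tensors standing in for network outputs and scribbles
logits = torch.randn(2, 2, 64, 64, requires_grad=True)
scribbles = torch.randint(-1, 2, (2, 64, 64))
print(weak_segmentation_loss(logits, scribbles))

How such penalty terms are optimized (direct penalties versus constrained formulations) is precisely the kind of choice discussed in the lecture.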

Paul de Brem, science journalist, Paris

Paul de Brem is a professional anchorman specialised in space, scientific and technical events. Over the last 15 years, he has hosted more than 500 symposiums and debates for various clients such as CNRS, Procter&Gamble, Région Ile-de-France, Inserm, EDF, Sanofi, Institut Pasteur, ministère de la Recherche, CNES, etc.

A year ago, he hosted a two-day ministerial conference in English dedicated to higher education, with 48 ministers from 4 continents.

He also leads communication courses for professionals: Media-training, Powerful PowerPoint, Writing for the Internet, etc. for clients such as CNES, Banque de France, Orange, the ENA (Ecole nationale d’administration), etc.

He has been leading courses in scientific journalism at Sorbonne Université for 8 years. Previously, as a science editor for television and printed media, he actively collaborated with LCI, France 2, France 24, le Journal du dimanche, L’Express, etc.

Narine Kokhlikyan, Facebook, Karlsruhe Institute of Technology, Germany


 

Narine is a Research Scientist at Facebook AI focusing on model interpretability. She is the main creator of Captum, the PyTorch library for model interpretability. Narine studied at the Karlsruhe Institute of Technology in Germany and was a Research Visitor at Carnegie Mellon University. Her research focuses on AI model understanding, cognitive systems, and natural language processing. She is also one of the early contributors to the open source Apache SparkR package.

 

Abstract - Advanced concepts in deep learning

In this talk we will review a number of model interpretability algorithms and demonstrate how we can apply those algorithms on Deep Neural Networks. We will analyze the pros and cons of different model interpretability approaches and show how they can be used to debug model predictions and internals.

Lastly, we will show how the insights learned from different model interpretability techniques can help us to improve our models. Practical examples will be given using Captum: a novel model interpretability library for PyTorch.
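
For readers who want to experiment before the session, here is a minimal sketch of how Captum can attribute a prediction back to input pixels with Integrated Gradients; the ResNet-18 backbone (random weights), random input and target class are placeholders, not the talk's actual examples.

import torch
from torchvision.models import resnet18
from captum.attr import IntegratedGradients

model = resnet18().eval()                                   # placeholder classifier
inputs = torch.randn(1, 3, 224, 224, requires_grad=True)    # placeholder image batch

ig = IntegratedGradients(model)
# Attribute the score of class 0 back to the input pixels, starting from a black baseline
attributions, delta = ig.attribute(inputs,
                                   baselines=torch.zeros_like(inputs),
                                   target=0,
                                   return_convergence_delta=True)
print(attributions.shape, float(delta))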

 

Hervé Lombaert, École de Technologie Supérieure, Canada

 

Hervé Lombaert is a Professor at ETS Montreal, Canada, where he holds a Canada Research Chair in Shape Analysis in Medical Imaging. His research focuses on the statistics and analysis of shapes in the context of machine learning and medical imaging. His work on graph analysis has impacted the performance of several applications in medical imaging, from the early image segmentation techniques with graph cuts, to recent surface analysis with spectral graph theory and graph convolutional networks. Hervé has authored over 60 papers, 5 patents, and has presented over 20 invited talks. He had the chance to work in multiple centers, including Microsoft Research (Cambridge, UK), Siemens Corporate Research (Princeton, NJ), Inria Sophia-Antipolis (France), McGill University (Canada), and the University of Montreal (Canada). His research has also received several awards, including the Erbsmann Prize in Medical Imaging.

More at https://profs.etsmtl.ca/hlombaert

Abstract - Geometric deep learning - Examples on brain surfaces

How can we analyze complex shapes, such as the highly folded surface of the brain?  This talk will show how spectral shape analysis can benefit general problems where data fundamentally lives on surfaces.  Here, we exploit spectral coordinates derived from the Laplacian eigenfunctions of shapes.  Spectral coordinates have the advantage over Euclidean coordinates of being geometry-aware and of parameterizing surfaces explicitly.  This change of paradigm, from Euclidean to spectral representations, enables a classifier to be applied *directly* on surface data, via spectral coordinates.  Brain matching and learning of surface data will be shown as examples.  The talk will focus, first, on spectral representations of shapes, with an example on brain surface matching; second, on the basics of geometric, or spectral, deep learning; and finally, on the learning of surface data, with an example on automatic brain surface parcellation.
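
To make the idea of spectral coordinates concrete, here is a small NumPy sketch (not taken from the talk) that computes the first non-trivial Laplacian eigenvectors of a toy mesh graph and uses them as geometry-aware coordinates; the edge list and the number of coordinates are arbitrary.

import numpy as np
from scipy.sparse import coo_matrix

def spectral_coordinates(n_vertices, edges, k=3):
    """Return the first k non-trivial Laplacian eigenvectors of a mesh graph,
    used as spectral (geometry-aware) coordinates of its vertices."""
    i, j = edges[:, 0], edges[:, 1]
    w = np.ones(len(edges))
    # Symmetric adjacency matrix built from the edge list
    A = coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])),
                   shape=(n_vertices, n_vertices)).toarray()
    L = np.diag(A.sum(axis=1)) - A              # graph Laplacian L = D - A
    vals, vecs = np.linalg.eigh(L)              # use sparse eigensolvers on real brain meshes
    # The first eigenvector is constant; the following ones embed the surface geometry
    return vecs[:, 1:k + 1]

# Toy example: a square made of two triangles (4 vertices, 5 edges)
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0], [0, 2]])
print(spectral_coordinates(4, edges, k=2))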

 

Anirban Mukhopadhyay, Technische Universität Darmstadt, Germany

Dr. Anirban Mukhopadhyay obtained his PhD in Computer Science with a minor in Statistics from the University of Georgia, USA, in 2014. He is currently leading the junior research group Medical and Environmental Computing (MECLab) at Technische Universität Darmstadt, Germany. His current research focus is on the safe translation of AI toward healthcare. He recently spearheaded the first collaborative review on Generative Adversarial Networks for Medical Image Analysis. His research has received multiple prestigious awards, including the 2014 best thesis award in mathematical sciences from the University of Georgia and a MICCAI Society travel award.

https://sites.google.com/site/geometricanirban/ 

Abstract - Generative and adversarial methods for medical imaging

Generative adversarial networks (GANs) are a powerful subclass of deep generative models that are currently receiving widespread attention from the medical imaging community. The key idea behind GANs is that two neural networks are jointly optimized in a competitive fashion: one network tries to synthesize samples that resemble real data points while a second network assesses how well the result corresponds to a reference database of samples. GANs have been successfully exploited in typical medical image analysis applications such as denoising, synthesis, reconstruction, segmentation, and detection. Moreover, GANs have led to new applications in paradigms such as semi-supervised learning and abnormality detection. In this lecture, I will provide basic as well as advanced material on GANs and adversarial methods in medical image analysis. We will focus on key state-of-the-art papers in the machine learning and computer vision literature and their relation to works in medical image analysis. To make these concepts tangible, I will also provide examples of applications in medical imaging.
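
As a rough illustration of the two-network game described above, here is a minimal PyTorch sketch of a GAN training loop on toy 1-D data; the network sizes, data and hyperparameters are placeholders, not a recipe for medical images.

import torch
import torch.nn as nn

# Toy setting: the generator maps 16-D noise to 64-D "images"
G = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 64))
D = nn.Sequential(nn.Linear(64, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.randn(32, 64)                   # placeholder for real training samples
    fake = G(torch.randn(32, 16))

    # Discriminator step: real samples labelled 1, generated samples labelled 0
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the (updated) discriminator into labelling fakes as real
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print(float(loss_d), float(loss_g))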

 

Martin Vallières, Université de Sherbrooke

 

Martin Vallières is a new assistant professor in the Department of Computer Science at the Université de Sherbrooke (April 2020). He obtained a PhD in medical physics from McGill University in 2017 and completed postdoctoral training in France and the United States in 2018 and 2019. The main goal of Martin Vallières’ research is the development of clinically actionable models to better personalize cancer treatment (“precision oncology”). He is an expert in radiomics (the quantitative analysis of medical images) and machine learning in oncology. Over the course of his career, he has developed several predictive models for different types of cancer. His main research interest now focuses on the graph-based integration of heterogeneous medical data for improved precision oncology.

Ruud Van Sloun, Eindhoven University of Technology

 

Ruud JG van Sloun (Member, IEEE) received the B.Sc. and M.Sc. degrees (cum laude) in electrical engineering and the Ph.D. degree (cum laude) from the Eindhoven University of Technology, Eindhoven, The Netherlands, in 2012, 2014, and 2018, respectively. Since then, he has been an Assistant Professor with the Department of Electrical Engineering at the Eindhoven University of Technology and, since January 2020, a Kickstart-AI fellow at Philips Research, Eindhoven. From 2019 to 2020, he was also a Visiting Professor with the Department of Mathematics and Computer Science at the Weizmann Institute of Science, Rehovot, Israel. He is an NWO Rubicon laureate and received a Google Faculty Research Award in 2020. His current research interests include artificial intelligence and deep learning for front-end (ultrasound) signal processing, model-based deep learning, compressed sensing, ultrasound imaging, and probabilistic signal and image analysis.

Abstract - Deep learning for image acquisition and reconstruction


Deep learning has revolutionized computer vision across many domains, including medical imaging. It has become the workhorse of advanced organ/structure segmentation and image classification in MRI, ultrasound, CT and beyond. These intelligent image analysis applications, however, still rely on traditional methods for upstream image formation. Today, advancing not just the downstream processing, but also the upstream image acquisition and reconstruction through deep learning is receiving increasing attention. This talk aims to convey the rationale, concepts, and methods that underpin this transition, while showing illustrative examples for both ultrasound and MRI. Throughout, we will place a particular emphasis on model-based deep learning methods, i.e. deep networks that leverage known signal structure by integrating models into deep networks (deep unfolding methods), and deep networks that are integrated into known model-based algorithms (data-driven hybrid algorithms). Lastly, we will discuss and exemplify the power of end-to-end optimization of entire imaging chains, from the upstream acquisition to the final downstream analysis.
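
To illustrate the deep-unfolding idea mentioned above, here is a toy PyTorch sketch (not from the talk) in which each layer performs one proximal-gradient iteration of a linear inverse problem, with learnable step sizes and thresholds; the measurement matrix and dimensions are arbitrary.

import torch
import torch.nn as nn

class UnfoldedISTA(nn.Module):
    """Toy deep-unfolding sketch: each layer is one proximal-gradient iteration
    x <- soft_threshold(x - step * A^T (A x - y)) with a learnable step and threshold."""
    def __init__(self, A, n_layers=8):
        super().__init__()
        self.register_buffer("A", A)                       # known measurement model
        self.steps = nn.Parameter(torch.full((n_layers,), 0.1))
        self.thresholds = nn.Parameter(torch.full((n_layers,), 0.01))

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.A.shape[1], device=y.device)
        for step, thr in zip(self.steps, self.thresholds):
            grad = (x @ self.A.T - y) @ self.A             # data-consistency gradient
            x = x - step * grad
            x = torch.sign(x) * torch.relu(x.abs() - thr)  # learned soft-thresholding
        return x

# Toy usage: 32 measurements of a 128-dimensional signal
A = torch.randn(32, 128) / 32 ** 0.5
net = UnfoldedISTA(A)
y = torch.randn(4, 32)
print(net(y).shape)   # (4, 128)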

 

Erik Ziegler, Radical Imaging, LLC, Boston, United States

Erik Ziegler is Director of Research and Development at Radical Imaging, a consultancy that provides software support for medical imaging projects. In 2014, he obtained his Ph.D. in Electrical Engineering from the University of Liège & Maastricht University as part of the Marie Curie training network in “NeuroPhysics”. Since that time, he has worked with collaborators in academia and industry to develop open-source software for web-based medical imaging, including Cornerstone.js and the Open Health Imaging Foundation (OHIF) framework (through Radical Imaging, under subcontract from Massachusetts General Hospital). He also acts as Lead Engineer for Novometrics LLC, which is working with researchers at Mass General Brigham to develop machine learning tools for use in oncology clinical trials.

REPLACED by

Steve Pieper, Isomics, Cambridge, United States


Dr. Steve Pieper is chief architect of the 3D Slicer platform and a renowned expert in medical imaging who has been cited over 7000 times in his career. He runs his own consultancy, Isomics Inc., and holds an honorary position at the Surgical Planning Lab in the Department of Radiology, Brigham and Women's Hospital, Harvard Medical School.

The round table will be composed of Pierre Croisille, Steve Pieper and Martin Vallières. Facilitator: Paul de Brem.

 

 

Local speakers and contributors

 

Olivier Bernard, CREATIS laboratory, Lyon, France

Dr. Olivier Bernard has an MSc in Electrical Engineering and received a PhD in Medical Image Processing from the University of Lyon (INSA), France, in 2006. In 2007, he was a postdoctoral research fellow at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, in the laboratory headed by Prof. Michael Unser.

In 2007, he became Associate Professor at the French University of Lyon and a member of the CREATIS laboratory (CNRS 5220, INSERM U1044, INSA-Lyon, University of Lyon). In 2008, he obtained the special mention (2nd prize) for the best Ph.D. in France awarded by the IEEE Engineering in Medicine and Biology Society. In September 2013, he was an invited professor at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, in the laboratory headed by Prof. Jean-Philippe Thiran. He was an Associate Editor for the IEEE Transactions on Image Processing journal (2013-2016) and was a member of the technical committees of the IEEE International Conference on Image Processing and the IEEE International Symposium on Biomedical Imaging (2014).

His current research interests include medical image analysis, with particular attention to cardiac imaging. He has a strong interest in machine learning, image segmentation, motion analysis, statistical modeling, sampling theories and image reconstruction.

Abstract

Advanced concepts in deep learning 1

This lecture is the follow-up of the "Basics in deep learning" lectures and will focus on state-of-the-art CNNs. We will present some of the most famous architectures for classification (namely AlexNet, VGGNet, InceptionNet, ResNet and DenseNet), semantic segmentation (namely encoder-decoder, UNet, ENet), localization (R-CNN, Fast and Faster R-CNN) and instance segmentation (Mask R-CNN).
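
As a quick way to experiment with several of the classification architectures listed above, here is a short sketch using the implementations shipped with torchvision (weights left random so nothing is downloaded); the segmentation and detection models mentioned in the lecture are not included in this snippet.

import torch
from torchvision import models

# Placeholder batch of 2 RGB images at ImageNet resolution
x = torch.randn(2, 3, 224, 224)

# A few of the classification architectures covered in the lecture, as provided by torchvision
for net in [models.alexnet(), models.vgg16(), models.resnet50(), models.densenet121()]:
    net.eval()
    with torch.no_grad():
        print(type(net).__name__, net(x).shape)   # each prints (2, 1000) class scores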

 

Suzanne Bussod, CREATIS laboratory, Lyon, France

Suzanne Bussod obtained an engineering degree in application development for image processing at Télécom Saint-Étienne and a master's degree in advanced imaging at the Université Jean Monnet of Saint-Étienne in 2018. She is currently in the third year of her PhD at the CREATIS lab, INSA Lyon, developing deep learning algorithms for spectral X-ray computed tomography. Her research interests are image and signal processing and analysis, medical imaging and deep learning.

Pierre Croisille, Université Jean Monnet & CHU Saint-Etienne, CREATIS laboratory, Lyon, France

 

Pierre Croisille is Professor of Radiology at Université de Lyon / Université Jean Monnet (Saint-Etienne, France), and Deputy Director of the CREATIS Research Lab (CNRS 5220, INSERM U1216). He is the Head of the Imaging Department and Chairman of Radiology and Nuclear Medicine at the University Hospital CHU Saint-Etienne. He earned his MD and PhD degrees at the University of Lyon (France). He trained at Johns Hopkins University (Baltimore, USA) and the Cantonal University Hospital (Geneva, Switzerland).

His research focuses on the development of innovative cardiac imaging approaches, including new noninvasive quantitative imaging methods and biomarkers to characterize myocardial and skeletal muscle damage. He actively promotes the transfer of fundamental knowledge to clinical needs, as emphasized by his involvement in the development of several software solutions (inTag, CMRSegTools, CMRDiffTools) that are distributed worldwide as plugins within the open-source Horos platform in a clinical environment.

He also has experience in managing multicenter collaborative projects and has been in charge of the MR core lab of several clinical trials. He is also actively involved, as a board member, in the supervision of a cardiac imaging databank (MRI, US) of the CARIM cohort, which collects heterogeneous imaging data (raw data, DICOM files) connected to the biobank and clinical e-CRF using an innovative distributed network spread across clinical sites.

He is one of the initiators of the Human Heart Project, a single point of reference for annotated medical imaging datasets that enables research teams to easily and rapidly share data, test computational methods and enhance collaboration around heart imaging and analysis (http://humanheart-project.creatis.insa-lyon.fr). Pierre Croisille is the author and/or co-author of more than 189 peer-reviewed papers mainly focusing on experimental, methodological or clinical applications of CMR (ResearcherID H-4928-2014).

Christian Desrosiers, École de Technologie Supérieure, Canada

Prof. Desrosiers obtained a Ph.D. in Applied Mathematics from Polytechnique Montreal in 2008 and was a postdoctoral researcher at the University of Minnesota with Prof. George Karypis. In 2009, he joined École de technologie supérieure (ÉTS) as a professor in the Department of Software and IT Engineering. He is co-director of the Laboratoire d’imagerie, de vision et d’intelligence artificielle (LIVIA) and a member of the REPARTI research network. He has over 100 publications in the fields of machine learning, image processing, computer vision and medical imaging, and has served on the scientific committees of several important conferences in these fields.

Abstract - Basics in deep learning 2

This second talk on the basics of deep learning introduces fundamental concepts for training neural networks, including the backpropagation algorithm, network weight initialization and the choice of batch size. It also presents optimization techniques such as the Adam algorithm that are commonly used in deep learning. Moreover, we will give an overview of convolutional neural networks, which are at the core of most deep learning applications for image analysis, and present various concepts related to this architecture like feature maps and pooling. Finally, we will discuss powerful learning strategies to improve performance, including network pretraining and transfer learning. 
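
A compact PyTorch sketch of the ingredients mentioned above, offered only as an illustration: convolutional feature maps, max pooling, a cross-entropy objective, backpropagation and an Adam update. The 28x28 single-channel inputs and the random batch are placeholders.

import torch
import torch.nn as nn

# Minimal CNN illustrating feature maps, pooling and training with Adam
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),             # 10 output classes (e.g. digits)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a random placeholder batch of 28x28 images
images, labels = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)    # forward pass
loss.backward()                            # backpropagation
optimizer.step()                           # Adam update of all weights
print(float(loss))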

Jose Dolz, École de Technologie Supérieure, Canada

Jose Dolz obtained a PhD with the highest honors in Applied Mathematics from the University of Lille 2 in 2016. He was awarded a Marie Curie scholarship to pursue his doctoral studies. He was then a postdoctoral researcher at ETS Montreal, where he is currently an Assistant Professor in the Department of Software and IT Engineering. His current research interests include medical image analysis, image segmentation, multi-modal learning and weakly supervised methods. Jose serves as a regular reviewer for the main conferences and journals in medical imaging, machine learning and computer vision. Furthermore, he received the outstanding reviewer award at ECCV’20.

Abstract - (common lecture with Ismail Ben Ayed) Weakly supervised deep learning

Weakly- and semi-supervised learning methods, which do not require full annotations and scale up to large problems and data sets, are currently attracting substantial research interest in both the CVPR and MICCAI communities. The general purpose of these methods is to mitigate the lack of annotations by leveraging unlabeled data with priors, either knowledge-driven (e.g., anatomy priors) or data-driven (e.g., domain adversarial priors). For instance, semi-supervision uses both labeled and unlabeled samples, weak supervision uses uncertain (noisy) labels, and domain adaptation attempts to generalize the representations learned by CNNs across different domains (e.g., different modalities or imaging protocols). In semantic segmentation, a large body of very recent works focused on training deep CNNs with very limited and/or weak annotations, for instance, scribbles, image level tags, bounding boxes, points, or annotations limited to a single domain of the task (e.g., a single imaging protocol). Several of these works showed that adding specific priors in the form of unsupervised loss terms can achieve outstanding performances, close to full-supervision results, but using only fractions of the ground-truth labels.

This presentation overviews very recent developments in weakly supervised CNN segmentation. More specifically, we will discuss several recent state-of-the-art models, and connect them from the perspective of imposing priors on the representations learned by deep networks. First, we will detail the loss functions driving these models, including, among others,  knowledge-driven functions (e.g., anatomy, shapes, or conditional random field losses), as well as commonly used knowledge and data-driven priors. Then, we will discuss several possible optimization strategies for each of these losses, and emphasize the importance of optimization choice. 

Nicolas Duchateau, CREATIS laboratory, Lyon, France

Nicolas Duchateau is Associate Professor (Maître de Conférences) at the Université Lyon 1 and the CREATIS lab in Lyon, France. His research focuses on the statistical analysis of medical imaging data to better understand disease onset and evolution, and to a certain extent computer-aided diagnosis. On the technical side, it mainly covers post-processing through statistical atlases and machine learning techniques, as well as dedicated pre-processing and validation, including the generation of synthetic databases. On the clinical/applicative side, it covers the study of cardiac function in heart failure populations, through routine imaging data and advanced 2D/3D shape, motion and deformation descriptors.

Nicolas Ducros, CREATIS laboratory, Lyon, France

 

Nicolas Ducros has been an Associate Professor in the Electrical Engineering Department of Lyon University and with the Biomedical Imaging Laboratory CREATIS since 2014. His research interests include signal and image processing, and applied inverse problems, with particular emphasis on single-pixel imaging and spectral computed tomography. His recent work focuses on deep learning for image reconstruction and, in particular, on network architectures that can be interpreted as conventional reconstruction methods. He is an Associated Member of the IEEE Bio Imaging and Signal Processing Technical Committee.

Abstract - Deep learning for inverse problems

In this talk, we will consider the reconstruction of an image from a sequence of a few linear measurements corrupted by noise. This generic problem has many biomedical applications, such as computed tomography, positron emission tomography, and optical microscopy. First, we will formalize the problem in a Bayesian setting where we estimate the missing measurements from those acquired. Then, we will establish a connection between Bayesian reconstruction and recent approaches based on deep networks. We will illustrate this interpretation with simple code examples relying on the SPyRiT Python package, itself based on PyTorch. Finally, we will focus on an optical problem where the set-up acquires some coefficients of the Hadamard transform of the image of the scene. We will present reconstruction results from experimental datasets acquired under various noise levels.
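
As a stand-alone illustration of the Bayesian completion idea (deliberately not using the SPyRiT API, whose exact interface is not reproduced here), a NumPy sketch that estimates missing linear coefficients from the acquired ones via the conditional Gaussian mean; the training signals, noise level and index split are synthetic.

import numpy as np

# Toy sketch: estimate missing linear measurements from the acquired ones, using
# second-order statistics learned on a training set of signals.
rng = np.random.default_rng(0)
n, m = 64, 16                              # n coefficients in total, m acquired
X = rng.standard_normal((1000, n)) @ rng.standard_normal((n, n)) / n  # placeholder training signals

mu = X.mean(axis=0)
Sigma = np.cov(X, rowvar=False)
acq = np.arange(m)                         # indices of acquired coefficients
mis = np.arange(m, n)                      # indices of missing coefficients

# Conditional Gaussian mean: E[x_mis | x_acq] = mu_mis + S_ma S_aa^{-1} (x_acq - mu_acq)
S_aa = Sigma[np.ix_(acq, acq)]
S_ma = Sigma[np.ix_(mis, acq)]
x_acq = X[0, acq] + 0.01 * rng.standard_normal(m)   # noisy acquisition of one signal
x_mis_hat = mu[mis] + S_ma @ np.linalg.solve(S_aa, x_acq - mu[acq])
print(x_mis_hat.shape)                     # (48,) estimated missing coefficients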

 

Thomas Grenier, CREATIS laboratory, Lyon, France

Dr. Thomas Grenier is Associate Professor at INSA Lyon Electrical Engineering department and at the CREATIS lab in Lyon, France.

His research focuses on the longitudinal analysis of medical data to study evolution, such as multiple sclerosis lesions and functional activity (muscle and hydrocephalus). Most of these studies involve a segmentation task and dedicated pre- and post-processing steps. Clustering (spatio-temporal mean-shift), semi-supervised (multi-atlas with machine learning) or fully supervised (DNN) schemes are used to solve such problems while taking their specific constraints into account.

 

Rémi Emonet, LabHC, Saint-Etienne, France

Rémi Emonet is Associate Professor (Maître de Conférences) at University Jean Monnet and leads the Machine Learning project at the Laboratoire Hubert Curien in Saint-Étienne. He got a Ph.D. from the University of Grenoble, working at Inria, and spent several years at the Idiap Research Institute, Switzerland, working on probabilistic models for unsupervised activity modeling in videos. His current research and contributions focus on transfer learning, deep representation learning and anomaly detection. He likes to manipulate Bayesian approaches and to derive meaningful guarantees for machine learning algorithms.

Abstract - Bayesian neural network - Uncertainty

It is often critical to know whether we can trust a prediction made by a learned model, especially for medical applications. In this session, we will start by better understanding the possible sources of uncertainty. After acknowledging that plain deep models are failing at estimating their own uncertainty, we will study the general solutions that are usually applied in everyday machine learning, including dropout and ensemble methods.

A further focus will be made on the formalism of Bayesian Neural Networks (BNNs), which is related to Variational Inference (as in variational autoencoders). Instead of learning a (single) value for the network weights, a BNN learns a probability distribution over the weights, effectively learning a continuous ensemble of models. Among other applications, we will see that dropout can be seen as a particular BNN, and that the Bayesian formulation has the advantage of opening the door to novel dropout and regularization approaches.
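
As a small taste of the dropout-based uncertainty estimates discussed above, here is a PyTorch sketch of Monte Carlo dropout: dropout is kept active at prediction time and the spread over repeated forward passes serves as an uncertainty estimate. The network, dropout rate and inputs are placeholders.

import torch
import torch.nn as nn

# A small network with dropout, kept active at test time (Monte Carlo dropout)
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 1))

def mc_dropout_predict(model, x, n_samples=50):
    model.train()                           # keep dropout active during inference
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)   # predictive mean and uncertainty

x = torch.randn(5, 10)                      # placeholder inputs
mean, std = mc_dropout_predict(model, x)
print(mean.squeeze(), std.squeeze())        # larger std = less trustworthy prediction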

Pierre-Marc Jodoin, University of Sherbrooke, Canada

Pierre-Marc Jodoin is a full professor at the University of Sherbrooke, Canada, where he has worked since 2007. He specializes in the development of novel machine learning and deep learning techniques applied to computer vision and medical imaging, mostly in video analytics and brain and cardiac image analysis. He is the co-director of the Sherbrooke AI platform and co-founder of the medical imaging company "Imeka.ca", which specializes in MRI brain image analysis. Website: http://info.usherbrooke.ca/pmjodoin/

 Abstract - Hands-on session 2

During this hands-on session, the participants will learn how to build and play around with a simple auto-encoder trained on a dataset called "MNIST". This will allow participants to familiarize themselves with the notions of encoder network, latent space and decoder network (the so-called "generative net"). Then, participants will be introduced to variational auto-encoders and subsequently to convolutional auto-encoders. An application on 2D cardiac shapes will also be provided.
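
For participants who want a head start, here is a minimal PyTorch sketch of the kind of auto-encoder manipulated in the session, trained here for a single step on a random placeholder batch instead of real MNIST images; the latent size and layer widths are arbitrary choices, not the session's notebook.

import torch
import torch.nn as nn

# Minimal fully connected auto-encoder for 28x28 MNIST-like digits
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 16))
decoder = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 28 * 28), nn.Sigmoid())
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

# One reconstruction step on a placeholder batch (swap in real MNIST images in the session)
images = torch.rand(32, 1, 28, 28)
latent = encoder(images)                               # 16-dimensional latent code
recon = decoder(latent).view(-1, 1, 28, 28)            # decoded "generative" output
loss = nn.functional.mse_loss(recon, images)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(latent.shape, float(loss))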

 Abstract - Basics in deep learning 1

During this presentation, we will explore the bedrock of neural networks. We will see what an "artificial neuron" is, how it can be turned into a simple neural network (aka the Perceptron) and how that simple network can be turned into a deep neural network (aka a Multi-Layer Perceptron). We will also discover what it means for a neural network to be "trained" and, from there, explore fundamental concepts such as the softmax layer, the cross-entropy loss function, the learning rate, and network regularization.
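
A few lines of PyTorch covering the same vocabulary (multi-layer perceptron, softmax, cross-entropy, learning rate, regularization via weight decay), offered only as an illustration; the data are random placeholders.

import torch
import torch.nn as nn

# A multi-layer perceptron: artificial neurons stacked in fully connected layers
mlp = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 3))   # 3 output classes
criterion = nn.CrossEntropyLoss()            # combines softmax and cross-entropy
optimizer = torch.optim.SGD(mlp.parameters(), lr=0.1, weight_decay=1e-4)  # weight decay = regularization

x = torch.randn(16, 4)                       # placeholder batch of 4-dimensional inputs
y = torch.randint(0, 3, (16,))               # placeholder class labels
for epoch in range(5):                       # "training" = repeated gradient steps
    optimizer.zero_grad()
    loss = criterion(mlp(x), y)
    loss.backward()
    optimizer.step()
probs = torch.softmax(mlp(x), dim=1)         # softmax turns scores into class probabilities
print(probs[0])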

Carole Lartizien, CREATIS laboratory, Lyon, France

Carole Lartizien received the bachelor's degree in Nuclear Engineering from the National Polytechnic Institute, Grenoble, France, in 1996. She received the master's degree in Biomedical Engineering and the Ph.D. degree in Image Processing from the University Paris XI, France, in 1997 and 2001, respectively. She is a CNRS Research Director conducting research at the CREATIS laboratory in Lyon, whose main areas of excellence concern the identification of major health issues that can be addressed by imaging, and of theoretical barriers in biomedical imaging related to signal and image processing, modelling and numerical simulation. Her research interests include machine learning methods (kernel methods, deep learning) for classification problems and the prototyping of computer-aided diagnosis (CAD) systems for cancer and neuroimaging.

https://www.creatis.insa-lyon.fr/~lartizien/

 

Odyssée Merveille, CREATIS laboratory, Lyon, France

 

Odyssée Merveille has been an associate professor at INSA Lyon and at the CREATIS laboratory since 2019. She received a PhD degree in computer science from the Université Paris-Est in 2016 and was a postdoc at the Université de Strasbourg. Her scientific interests include inverse problems and deep learning for medical imaging, in particular for the analysis of vascular networks.

 Abstract (common lecture with Emmanuel Roux) - Introduction to machine learning

Nowadays machine learning has a central role in many medical imaging applications. This first course is an introduction to machine learning with a focus on medical imaging. We will define the basic concepts that are required for the rest of the spring school (task, model, datasets, metric, loss...). These concepts will be illustrated on common applications encountered in the medical imaging field.
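
To fix these notions (task, model, dataset, metric) with a concrete example, here is a short scikit-learn sketch on the small built-in digits dataset; it is only meant as an illustration of the vocabulary, not part of the course material.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Dataset: small 8x8 digit images; task: classify the digit shown in each image
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Model: logistic regression, fitted by minimizing a cross-entropy loss on the training set
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Metric: accuracy measured on data held out from training
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))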

Fabien Millioz, CREATIS laboratory, Lyon, France

Fabien Millioz graduated from the École Normale Supérieure de Cachan, France, and received the M.Sc. degree in 2005 and the Ph.D. degree in 2009, both in signal processing, from the Institut National Polytechnique de Grenoble, France. Since 2011, he has been a lecturer at University Claude Bernard Lyon 1, and he has been a member of the CREATIS lab since 2015.

His research interests are statistical signal processing, fast acquisition, compressed sensing and neural networks.

 

Bruno Montcel, CREATIS laboratory, Lyon, France

Bruno Montcel is Associate Professor (Maître de Conférences - HDR) at the Université Lyon 1 and the CREATIS lab in Lyon, France. His research focuses on optical imaging methods and experimental setups for the exploration of brain physiology and pathologies. It mainly concerns intraoperative and point-of-care hyperspectral optical imaging methods for medical diagnosis and gesture assistance.

Nathan Painchaud, University of Sherbrooke, Canada

 

Nathan Painchaud graduated with a B.Sc. in computer science from the Université de Sherbrooke (UdeS) in 2019. He went on to pursue a master's degree at UdeS, during which he worked on guaranteeing the anatomical validity of automatic cardiac segmentations. He recently began a joint thesis between UdeS and INSA Lyon, and is currently a first-year Ph.D. student at the VITAL lab, headed by Pierre-Marc Jodoin. His thesis focuses on representation learning to help characterize cardiac pathologies. Although he only started doing research recently, his work has already been published in top-tier conferences and journals (MICCAI, MIDL, IEEE TMI). His broad research interests cover artificial intelligence and its application to computer vision and medical imaging. In practice, he specializes in representation learning applied to cardiac image analysis.

 Abstract - Hands-on session 2

During this hands-on session, the participants will learn how to build and play around with a simple auto-encoder trained on a dataset called "MNIST". This will allow participants to familiarize themselves with the notions of encoder network, latent space and decoder network (the so-called "generative net"). Then, participants will be introduced to variational auto-encoders and subsequently to convolutional auto-encoders. An application on 2D cardiac shapes will also be provided.

Emmanuel Roux, CREATIS laboratory, Lyon, France

 

Emmanuel Roux has been Associate Professor at the Université Lyon 1 and the CREATIS lab in Lyon (France) since 2019. He received the M.Sc. degree in electrical and electronics engineering and the M.Sc. degree in science for technologies and health from the Lyon National Institute of Applied Sciences (INSA-Lyon) in 2013. He received a Ph.D. in acoustics and information engineering from the University of Lyon (France) and the University of Florence (Italy) in 2016, where he worked on 2-D transducer optimization for 3-D ultrasound imaging. In 2017, he did a first postdoc on pyramidal filtering for ultrasound imaging at the University of Florence (Italy). During 2018 and 2019, he did a second postdoc on semi-supervised active deep learning for transfer learning at the Laboratoire Hubert Curien (France). His current research concerns (deep) machine learning for medical imaging, with particular interests in 3-D ultrasound imaging, deep predictions from multi-modal (PET-MRI) images, pathological lung CT image segmentation, weakly-supervised anomaly detection in Doppler signals, model interpretability, and sparse and quantized neural networks.

 Abstract (common lecture with Odyssée Merveille) - Introduction to machine learning

Nowadays machine learning has a central role in many medical imaging applications. This first course is an introduction to machine learning with a focus on medical imaging. We will define the basic concepts that are required for the rest of the spring school (task, model, datasets, metric, loss...). These concepts will be illustrated on common applications encountered in the medical imaging field.

Michaël Sdika, CREATIS laboratory, Lyon, France

Michaël Sdika works at the CREATIS lab in Lyon, France. His current research focuses on the development of new deep learning-based analysis methods for medical data. His main contributions are centered around image registration, atlas-based segmentation, structure localization and machine learning for MR images of the central nervous system.

 
