DGCI2019

21st IAPR International Conference on

Discrete Geometry for Computer Imagery

ESIEE Paris,

25-29 March 2019

url: https://dgci2019.sciencesconf.org/

--------------------------------------------------------------

Call for Papers

---------------

Discrete geometry plays an expanding role in the fields of shape modelling, image synthesis, and image analysis. It deals with topological and geometrical definitions of digitized objects or digitized images and provides both a theoretical and computational framework for computer imaging.

The aim of the DGCI conference is to bring together researchers, students and practitioners of discrete geometry, including discrete topology, geometric models, and mathematical morphology, to present and discuss new advances in these fields, be they purely theoretical developments or novel applications.

DGCI 2019 is organized by Laboratoire d'Informatique Gaspard-Monge (LIGM) at ESIEE Paris, France.

Together with the main event from 26th to 28th March, a one-day pre-conference event on Discrete Topology and Mathematical Morphology in honor of the retirement of Gilles Bertrand and a half-day post-conference event on Computational Geometry are also planned.

The main topics of interest include (but are not limited to):

Models for Discrete Geometry,

Discrete and Combinatorial Topology,

Geometric Transforms,

Discrete Shape Representation, Recognition and Analysis,

Discrete Tomography,

Mathematical Morphological Analysis,

Discrete Modelling and Visualization,

Discrete and Combinatorial Tools for Image Segmentation and Analysis.

Both theoretical and applicative contributions related to these topics are welcome.

Submission deadlines

---------------

Title and abstract: 23.09.2018

Full paper: 01.10.2018

Proceedings

---------------

The proceedings of the conference are published in the Lecture Notes in Computer Science series by Springer.

Steering Committee

---------------

- David Coeurjolly (President of SC, CNRS, Université de Lyon, France)
- Eric Andres (Université de Poitiers, France)
- Gunilla Borgefors (Uppsala University, Sweden)
- Srečko Brlek (Université du Québec à Montréal, Canada)
- Isabelle Debled-Rennesson (Université de Lorraine, France)
- Andrea Frosini (Università di Firenze, Italy)
- María José Jiménez (Universidad de Sevilla, Spain)
- Bertrand Kerautret (Université de Lyon, France)
- Walter Kropatsch (Technische Universität Wien, Austria)
- Jacques-Olivier Lachaud (Université de Savoie, France)
- Nicolas Normand (Université de Nantes, France)

General chairs

---------------

- Michel Couprie (ESIEE Paris, Université Paris-Est, France)
- Jean Cousty (ESIEE Paris, Université Paris-Est, France)
- Yukiko Kenmochi (CNRS, Université Paris-Est, France)
- Nabil Mustafa (ESIEE Paris, Université Paris-Est, France)

Keynote speakers

---------------

- Alexandre Xavier Falcão - University of Campinas, Brazil (main event)
- Longin Jan Latecki - Temple University, USA (main event)
- János Pach - EPFL, Switzerland (main event)
- Tat Yung Kong - City University of New York, USA (pre-conference event)
- Philippe Salembier - Universitat Politècnica de Catalunya, Spain (pre-conference event)

Website and contact

---------------

https://dgci2019.sciencesconf.org

dgci2019@sciencesconf.org


DTMM2019

Workshop on Digital Topology and Mathematical Morphology

on the occasion of the retirement of Gilles Bertrand

ESIEE Paris

March 25th, 2019

url: https://dtmm-gb.sciencesconf.org

--------------------------------------------------------------

Call for contributions

----------------------

On the occasion of the retirement of Gilles Bertrand, a special one-day workshop on digital topology and mathematical morphology is organized. The workshop will take place at ESIEE Paris, Université Paris-Est on March 25th, 2019. This workshop will be held as a pre-conference event of the 21st International Conference on Discrete Geometry for Computer Imagery (DGCI 2019), which is also organized at ESIEE Paris, Université Paris-Est on 26-28 March, 2019.

You are kindly invited to submit a proposition for a talk (around 20 minutes) or a poster, on a topic related to those on which Gilles made his major contributions, namely Digital Topology (in a broad sense) and discrete Mathematical Morphology.

The propositions will be in the form of a title and an abstract. The contents may be an original contribution, a survey of a particular problem, or a recent result already published or submitted.

A booklet containing all the abstracts will be printed, given to the audience of the workshop, and made available on arXiv. Additional texts (of up to 12 pages) are welcome and will be included in the booklet.

Registration is free but mandatory to attend the workshop.

KEYNOTE SPEAKERS:

-----------------

Gilles Bertrand, ESIEE, LIGM

Tat Yung Kong, City University of New York

Philippe Salembier, Universitat Politècnica de Catalunya, Barcelona

IMPORTANT DATES:

----------------

Deadline for talk proposal: January 15th, 2019

Notification of acceptance: January 25th, 2019

Deadline for registration: February 15th, 2019

Workshop: March 25th, 2019

WEBSITE AND CONTACT:

--------------------

https://dtmm-gb.sciencesconf.org

dtmm-gb@sciencesconf.org

*The “Cloud” to “Things” Continuum*

Room 5257 (ESIEE-Paris)

Back in 2011, we introduced the concept of a multi-tier cloud as part of the “Smart Applications on Virtualized Infrastructure (SAVI)” NSERC Strategic Network Project. SAVI extends the traditional cloud computing environment into a two-tier cloud comprising smart edges – small to moderate size data centers located close to the end-users (e.g., on service provider premises) – and massive-scale data centers with abundant high-performance computing resources, typically located in remote areas. We designed the smart edge as a converged infrastructure that uses virtualization, cloud computing, and network softwarization principles to support multiple network protocols, customizable network services, and high-bandwidth, low-latency applications. Since then, the concept of a multi-tier cloud has been widely adopted by telecom operators and in initiatives such as Mobile Edge Computing (MEC).

In the meantime, the advent of the Internet of Things (IoT) has seen explosive growth in the number of connected devices generating a large variety of data in high volumes at high velocities. The unique set of requirements posed by IoT data demands innovation in the information infrastructure, with the objectives of minimizing latency and conserving bandwidth resources. The multi-tier cloud computing model proposed in SAVI falls short of addressing the needs of IoT applications, since most of this voluminous, heterogeneous, and short-lived data will have to be processed and analyzed closer to the IoT devices generating it. Therefore, it is imperative that the future information infrastructure incorporate more tiers (e.g., IoT gateways, customer-premises equipment) into the multi-tier cloud to enable true at-scale end-to-end application orchestration.

In this talk, we will discuss the research challenges in realizing this future information infrastructure, which should be massively distributed to achieve scalability; highly interoperable for seamless interaction between different enabling technologies; highly flexible for collecting, fusing, mining, and processing IoT data; and easily programmable for service orchestration and application enablement.

*Watershed: how to learn it end-to-end, and how to use it in graph partitioning*

Room 3005 (ESIEE PARIS)

**Abstract:** In the first part, I will sketch how to realize end-to-end learning of a segmentation pipeline involving a watershed computation [Wolf et al., "Learned Watershed", ICCV 2017].

In the second part, I will discuss ongoing work on instance segmentation, which can be cast as a graph partitioning problem. The majority of models developed in this context have relied on purely attractive interactions between graph nodes. To obtain more than a single cluster, it is then necessary to manually set a merging or splitting threshold, or to pre-specify a desired number of clusters.

A notable exception to the above is multicut partitioning / correlation clustering, which allows for repulsive in addition to attractive interactions, and which automatically determines an optimal number of clusters. Unfortunately, the multicut problem is NP-hard.

In response, we propose an objective function that allows for both repulsive and attractive interactions, but which, as we show, can be solved to optimality by a greedy algorithm. At the time of writing, the new scheme gives state-of-the-art results on the ISBI connectomics challenge.
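The flavor of such a greedy scheme on a signed graph can be sketched as follows. This is a toy illustration only, not the authors' algorithm or code: the edge weights, function names, and the union-find helper are all invented for the example. Attractive edges merge clusters; repulsive edges record constraints that forbid later merges; edges are processed by decreasing absolute weight.

```python
# Toy sketch of greedy partitioning on a signed graph (illustrative only;
# names, weights, and the union-find helper are ours, not the authors').

class UnionFind:
    """Disjoint sets with path halving; union by rank omitted for brevity."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def greedy_signed_partition(n, edges):
    """edges: triples (u, v, w); w > 0 is attractive, w < 0 repulsive.
    Greedily process edges by decreasing |w|: an attractive edge merges
    two clusters unless a stronger repulsive edge already forbids it."""
    uf = UnionFind(n)
    forbidden = set()  # pairs of cluster representatives kept apart
    for u, v, w in sorted(edges, key=lambda e: -abs(e[2])):
        ru, rv = uf.find(u), uf.find(v)
        if ru == rv:
            continue
        if w < 0:
            forbidden.add(frozenset((ru, rv)))
        elif frozenset((ru, rv)) not in forbidden:
            uf.union(ru, rv)
            # refresh the representatives stored in the constraint set
            forbidden = {frozenset(uf.find(x) for x in p) for p in forbidden}
    return [uf.find(i) for i in range(n)]
```

Note that the number of clusters emerges from the interplay of attractive and repulsive edges, with no threshold or preset cluster count, which is exactly the property motivating the approach above.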

Joint work with Steffen Wolf, Constantin Pape, Nasim Rahaman, Alberto Bailoni, Anna Kreshuk, Ullrich Koethe.

*Tree Containment With Soft Polytomies*

Bâtiment Lavoisier, salle LAV108

The Tree Containment problem has many important applications in the study of evolutionary history. Given a phylogenetic network N and a phylogenetic tree T whose leaves are labeled by a set of taxa, it asks if N and T are consistent. While the case of binary N and T has received considerable attention, the more practically relevant variant dealing with biological uncertainty has not. Such uncertainty manifests itself as high-degree vertices (“polytomies”) that are “jokers” in the sense that they are compatible with any binary resolution of their children.

In contrast to the binary case, we show that this problem, called Soft Tree Containment, is NP-hard, even if N is a binary, multi-labeled tree in which each taxon occurs at most thrice. On the other hand, we reduce the case in which each label occurs at most twice to solving a 2-SAT instance of size O(|T|^3). This implies NP-hardness and polynomial-time solvability on reticulation-visible networks in which the maximum in-degree is bounded by three and two, respectively.
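As background for the 2-SAT step: the reduction itself is specific to the paper, but the key fact it exploits, that 2-SAT is solvable in polynomial time via strongly connected components of the implication graph, can be sketched generically (this is the textbook SCC method, not the paper's reduction; all names are ours):

```python
# Generic 2-SAT solver via the implication graph (illustrative sketch;
# each clause (a or b) yields implications !a -> b and !b -> a, and the
# formula is satisfiable iff no variable shares an SCC with its negation).

def solve_2sat(num_vars, clauses):
    """clauses: list of pairs of literals encoded as +i / -i (1-based).
    Returns a satisfying assignment (list of bools) or None."""
    n = 2 * num_vars
    def node(lit):  # literal -> node index; negated literals get odd slots
        return 2 * (abs(lit) - 1) + (1 if lit < 0 else 0)
    adj = [[] for _ in range(n)]
    radj = [[] for _ in range(n)]
    for a, b in clauses:
        adj[node(-a)].append(node(b)); radj[node(b)].append(node(-a))
        adj[node(-b)].append(node(a)); radj[node(a)].append(node(-b))
    # Kosaraju: order by finish time, then label SCCs on the reverse graph
    visited, order = [False] * n, []
    def dfs1(u):
        stack = [(u, iter(adj[u]))]
        visited[u] = True
        while stack:
            v, it = stack[-1]
            for w in it:
                if not visited[w]:
                    visited[w] = True
                    stack.append((w, iter(adj[w])))
                    break
            else:
                order.append(v); stack.pop()
    for u in range(n):
        if not visited[u]:
            dfs1(u)
    comp, label = [-1] * n, 0
    for u in reversed(order):
        if comp[u] == -1:
            stack = [u]; comp[u] = label
            while stack:
                v = stack.pop()
                for w in radj[v]:
                    if comp[w] == -1:
                        comp[w] = label; stack.append(w)
            label += 1
    assignment = []
    for i in range(num_vars):
        if comp[2 * i] == comp[2 * i + 1]:
            return None  # x and !x in the same SCC: unsatisfiable
        assignment.append(comp[2 * i] > comp[2 * i + 1])
    return assignment
```

Since the Soft Tree Containment instance has size O(|T|^3) and SCC computation is linear in the instance size, the overall procedure stays polynomial.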

*Study of compressed stack algorithms in limited memory environment*

Room 260 (ESIEE PARIS)

**Abstract:** The need to run algorithms on limited-memory devices motivated us to consider data structures in settings where only a limited amount of memory is available (apart from the input representation). We propose a practical implementation of the theoretical work of Barba et al. In their work, they introduce a class of algorithms called stack algorithms: algorithms that scan the input in a streaming fashion and have a stack as their space bottleneck. For those algorithms, Barba et al. introduce a new data structure, called the compressed stack, that gives a general time-space trade-off: it can reduce the amount of memory used by the algorithm at the cost of increased running time. Specifically, stack algorithms can run in O(n^(1+1/log_p(n))) time using O(p log_p(n)) space for any parameter p ∈ {2,...,n} (instead of Θ(n) time and space when a normal stack is used).
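To make the notion of a stack algorithm concrete, here is a minimal example of the class: a single streaming scan whose only working memory is a stack, computing the upper convex hull of x-sorted points. This is our illustration, not part of the library; in the compressed-stack setting, the plain Python list below is exactly what gets replaced by the compressed data structure.

```python
# A minimal "stack algorithm": one streaming pass over the input, with a
# stack as the sole space bottleneck. (Illustrative example only.)

def cross(o, a, b):
    """Cross product of vectors o->a and o->b (> 0 means a left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def upper_hull(points):
    """points: iterable of (x, y) pairs in increasing x, streamed once."""
    stack = []  # the space bottleneck that a compressed stack would replace
    for p in points:
        # pop while the last two stacked points and p do not turn right
        while len(stack) >= 2 and cross(stack[-2], stack[-1], p) >= 0:
            stack.pop()
        stack.append(p)
    return stack
```

Every input point is pushed once and popped at most once, so with an ordinary stack this runs in Θ(n) time and, in the worst case, Θ(n) space, the regime the compressed stack trades off.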

All of our algorithms are available in C++ (and Julia) under MIT licenses at https://github.com/Azzaare/CompressedStacks.cpp.git.

The article presenting this work is available on arXiv: https://arxiv.org/abs/1706.04708

Jean-Francois Baffier received his Master's degree from University Paris XI in 2011 and his Ph.D. from the University of Tokyo in 2015. He was a member of the ERATO Kawarabayashi Large Project from May 2015 to August 2017 in Tokyo and Sendai.

His main research topic is the modeling of failures and routing in networks. His other research interests include game analysis and AI for games (in particular StarCraft), as well as local search algorithms (HPC) and limited-memory algorithms.

He is currently supported by the Japan Society for the Promotion of Science as a JSPS-CNRS research fellow (Sept. 2017-2019) and hosted at the Tokyo Institute of Technology (Japan).

of Paris, is devoted to low-dimensional geometry and topology, from the viewpoints of both mathematicians and computer scientists. It is aimed at graduate students and researchers in mathematics and computer science interested in geometric or topological aspects. This includes, not exhaustively, mathematicians working in differential, Riemannian, or topological geometry; and computer scientists working in computational geometry or topology. The goal is to foster interactions between these communities.

Main speakers (each giving one 90-minute lecture per day):

* Jeff Erickson (University of Illinois at Urbana-Champaign, USA): Two-dimensional computational topology (tentative)

* Joel Hass (University of California at Davis, USA): Algorithms and complexity in the theory of knots and manifolds

Additional speakers (giving a presentation):

* Bruno Benedetti (University of Miami, USA)

* Benjamin Burton (University of Queensland, Australia)

* Erin Chambers (Saint Louis University, USA)

* Gregory Chambers (Rice University, USA)

* Moira Chas (Stony Brook University, USA)

* Francis Lazarus (CNRS, Gipsa-Lab, France)

* Hugo Parlier (University of Luxembourg)

* Saul Schleimer (University of Warwick, UK)

* Eric Sedgwick (DePaul University, USA)

* Uli Wagner (IST Austria)

* Yusu Wang (Ohio State University, USA)

Registration is free but mandatory, before March 15, 2018. We expect to be able to provide accommodation for some students and, depending on availability, post-doctoral researchers; see the webpage for more details when they become available.

geomschool2018.univ-mlv.fr/

Paris, Institut Henri Poincaré

June 18-22, 2018

Organizing committee:

* Éric Colin de Verdière (CNRS, LIGM, Université Paris-Est Marne-la-Vallée)

* Xavier Goaoc (LIGM, Université Paris-Est Marne-la-Vallée)

* Laurent Hauswirth (LAMA, Université Paris-Est Marne-la-Vallée)

* Alfredo Hubard (LIGM, Université Paris-Est Marne-la-Vallée)

* Stéphane Sabourau (LAMA, Université Paris-Est Créteil)

This event is supported by Bézout Labex and Institut Henri Poincaré.

*Self-supervision, Meta-supervision, Curiosity: Making Computers Study Harder*

Seminar room B412 of the IMAGINE group (ENPC - Bat. Coriolis)

**Abstract:** Computer vision has made impressive gains through the use of deep learning models trained with large-scale labeled data. However, labels require expertise and curation and are expensive to collect. Even worse, direct semantic supervision often leads to learning algorithms "cheating" and taking shortcuts instead of actually doing the work. In this talk, I will briefly summarize several of my group's efforts to combat this using self-supervision, meta-supervision, and curiosity — all ways of using the data as its own supervision. These lead to practical applications in image synthesis, image forensics, audio-visual source separation, etc.

*Mini-survey of additive combinatorics*

Bâtiment Lavoisier, salle LAV108

**Abstract:** Additive combinatorics studies how sets behave under the basic arithmetic operations (addition, multiplication). The far-ranging applicability of additive combinatorics throughout mathematics is primarily via the analysis of convolutions, by taking level sets.

In this lecture, I will survey some of the principal problems and results in the subject. The lecture will be followed by a mini-course for those interested.

*Multilinear compressive sensing and an application to convolutional linear networks*

Seminar room B412 of the IMAGINE group (ENPC - Bat. Coriolis)

**Abstract:** We study a deep linear network expressed in the form of a matrix factorization problem. It takes as input a matrix X obtained by multiplying K matrices (called factors, each corresponding to the action of a layer). Each factor is obtained by applying a fixed linear operator to a vector of parameters satisfying constraints. In machine learning, the error between the product of the estimated factors and X (i.e., the reconstruction error) relates to the statistical risk.

We first evaluate how the Segre embedding and its inverse distort distances. Then, we show that any deep matrix factorization can be cast as a generic multilinear problem (that uses the Segre embedding). We call this method tensorial lifting. Using the tensorial lifting, we provide necessary and sufficient conditions for the identifiability of the factors (up to a scale rearrangement). We also provide a necessary and sufficient condition, called the Deep Null Space Property (by analogy with the usual Null Space Property in the compressed sensing framework), which guarantees that even an inaccurate optimization algorithm for the factorization stably recovers the factors. In the machine learning context, the analysis provides sharp conditions on the network topology under which the error on the parameters defining the factors (i.e., the stability of the recovered parameters) scales linearly with the reconstruction error (i.e., the risk). Therefore, under these conditions on the network topology, any successful learning task leads to stably defined, and therefore explainable, layers.
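For reference, the Segre embedding mentioned above is the standard map sending the tuple of factor parameters to their tensor product; the notation below is ours and not necessarily the paper's:

```latex
% Segre embedding of the K parameter vectors (standard definition)
\[
  S \colon (h_1, \dots, h_K) \;\longmapsto\; h_1 \otimes h_2 \otimes \cdots \otimes h_K .
\]
% Any multilinear map of (h_1, ..., h_K) -- such as the product of the
% factors, each linear in its own h_k -- then factors through a fixed
% linear map A applied to the lifted tensor:
\[
  M_1(h_1)\, M_2(h_2) \cdots M_K(h_K) \;=\; A\bigl( S(h_1, \dots, h_K) \bigr).
\]
```

This is what makes the tensorial lifting useful: the multilinear factorization problem becomes a linear problem in the lifted tensor, at the price of the distortion that the embedding introduces, which is why the distance-distortion analysis comes first.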

We illustrate the theory with a practical example where the deep factorization is a convolutional linear network.
