Conference Proceedings
2021
2020
2019
2018
2017
2016
2015
2014
2013
2012
V. Gripon, V. Skachek, W. J. Gross and M. Rabbat, "Random clique codes," in Proceedings of 7th International Symposium on Turbo Codes and Iterative Information Processing, Gothenburg, Sweden, pp. 121-125, August 2012.
Manuscript.
Presentation.
2011
2010
C. Berrou and V. Gripon, "Coded Hopfield networks," in Proceedings of 6th International Symposium on Turbo Codes and Iterative Information Processing, Brest, France, pp. 1-5, September 2010.
Manuscript.
2009

In few-shot classification, the aim is to learn models able to discriminate classes using only a small number of labeled examples. In this context, some works have proposed to introduce Graph Neural Networks (GNNs) aiming at exploiting the information contained in other samples treated concurrently, which is commonly referred to as the transductive setting in the literature. These GNNs are trained together with a backbone feature extractor. In this paper, we propose a new method that relies on graphs only to interpolate feature vectors instead, resulting in a transductive learning setting with no additional parameters to train. Our proposed method thus exploits two levels of information: a) transfer features obtained on generic datasets, b) transductive information obtained from other samples to be classified. Using standard few-shot vision classification datasets, we demonstrate its ability to bring significant gains compared to other works.
Download manuscript.
Bibtex:
@inproceedings{GriPat2021,
  author = {Yuqing Hu and Vincent Gripon and Stéphane Pateux},
  title = {Graph-based Interpolation of Feature Vectors for Accurate Few-Shot Classification},
  booktitle = {25th International Conference on Pattern Recognition (ICPR)},
  year = {2021},
  pages = {8164--8171},
}
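As a rough illustration of the transductive idea (a minimal sketch under assumed cosine-similarity k-nearest-neighbour graphs, not the paper's exact algorithm), feature vectors can be interpolated with their graph neighbours without introducing any trainable parameters:

```python
import numpy as np

def graph_interpolate(features, k=3, alpha=0.5):
    """Smooth feature vectors by interpolating each one with its k nearest
    neighbours in a cosine-similarity graph (transductive, no training)."""
    X = features / np.linalg.norm(features, axis=1, keepdims=True)
    S = X @ X.T                            # cosine similarities
    np.fill_diagonal(S, -np.inf)           # no self-loops
    W = np.zeros_like(S)
    idx = np.argsort(S, axis=1)[:, -k:]    # k nearest neighbours per node
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, idx.ravel()] = S[rows, idx.ravel()]
    W = np.maximum(W, W.T)                 # symmetrize the graph
    W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    # blend each feature vector with the average of its neighbours
    return (1 - alpha) * features + alpha * (W @ features)
```

On well-clustered features this pulls samples of the same class closer together, which is the effect exploited at classification time.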

In machine learning, classifiers are typically susceptible to noise in the training data. In this work, we aim at reducing intra-class noise with the help of graph filtering to improve the classification performance. Considered graphs are obtained by connecting samples of the training set that belong to the same class depending on the similarity of their representation in a latent space. We show that the proposed graph filtering methodology has the effect of asymptotically reducing intra-class variance, while maintaining the mean. While our approach applies to all classification problems in general, it is particularly useful in few-shot settings, where intra-class noise can have a huge impact due to the small sample selection. Using standardized benchmarks in the field of vision, we empirically demonstrate the ability of the proposed method to slightly improve state-of-the-art results in both the few-shot and standard classification cases.
Download manuscript.
Bibtex:
@inproceedings{HamLasHuDruPasGri2021,
  author = {Mounia Hamidouche and Carlos Lassance and Yuqing Hu and Lucas Drumetz and Bastien Pasdeloup and Vincent Gripon},
  title = {Improving Classification Accuracy with Graph Filtering},
  booktitle = {ArXiv Preprint},
  year = {2021},
}
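A toy sketch of why low-pass graph filtering can reduce intra-class variance while preserving the class mean: with a complete within-class graph (a hypothetical simplification of the setting above), filtering amounts to shrinking each sample towards its class mean, which leaves the mean unchanged and scales the variance by exactly (1 - beta)^2:

```python
import numpy as np

def intra_class_filter(X, labels, beta=0.5):
    """Low-pass filter each sample towards its class mean. With a complete
    within-class graph this preserves the class mean exactly while shrinking
    intra-class variance by a factor (1 - beta)**2."""
    Xf = X.astype(float).copy()
    for c in np.unique(labels):
        mask = labels == c
        # convex combination of each sample and the class mean
        Xf[mask] = (1 - beta) * X[mask] + beta * X[mask].mean(axis=0)
    return Xf
```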

The field of Graph Signal Processing (GSP) has proposed tools to generalize harmonic analysis to complex domains represented through graphs. Among these tools are translations, which are required to define many others. Most works propose to define translations using solely the graph structure (i.e. edges). Such a problem is ill-posed in general as a graph conveys information about neighborhood but not about directions. In this paper, we propose to infer translations as edge-constrained operations that make a supervised classification problem invariant using a deep learning framework. As such, our methodology uses both the graph structure and labeled signals to infer translations. We perform experiments with regular 2D images and abstract hyperlink networks to show the effectiveness of the proposed methodology in inferring meaningful translations for signals supported on graphs.
Download manuscript.
Bibtex:
@inproceedings{BaeDruGri2021,
  author = {Raphael Baena and Lucas Drumetz and Vincent Gripon},
  title = {Inferring Graph Signal Translations as Invariant Transformations for Classification Tasks},
  booktitle = {ArXiv Preprint},
  year = {2021},
}

Polar codes can theoretically achieve very competitive Frame Error Rates. In practice, their performance may depend on the chosen decoding procedure, as well as other parameters of the communication system they are deployed upon. As a consequence, designing efficient polar codes for a specific context can quickly become challenging. In this paper, we introduce a methodology that consists in training deep neural networks to predict the frame error rate of polar codes based on their frozen bit construction sequence. We introduce an algorithm based on Projected Gradient Descent that leverages the gradient of the neural network function to generate promising frozen bit sequences. We showcase on generated datasets the ability of the proposed methodology to produce codes more efficient than those used to train the neural networks, even when the latter are selected among the most efficient ones.
Download manuscript.
Bibtex:
@inproceedings{LéGri2021,
  author = {Mathieu Léonardon and Vincent Gripon},
  title = {Using Deep Neural Networks to Predict and Improve the Performance of Polar Codes},
  booktitle = {ArXiv Preprint},
  year = {2021},
}
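The search step can be sketched as Projected Gradient Descent on the box [0, 1]^n, the continuous relaxation of frozen-bit indicator sequences. Here `grad_f` stands in for the gradient of the trained neural surrogate of the frame error rate; since that network is not reproduced here, the test below substitutes an analytic toy objective:

```python
import numpy as np

def pgd_minimize(grad_f, s0, lr=0.1, steps=200):
    """Projected Gradient Descent on the box [0, 1]^n: take a gradient step
    on the surrogate objective, then project back onto the feasible set.
    `grad_f` is assumed to return the gradient of the surrogate at s."""
    s = s0.copy()
    for _ in range(steps):
        s = np.clip(s - lr * grad_f(s), 0.0, 1.0)  # step + projection
    return s
```

A final thresholding of the relaxed solution would yield a candidate frozen-bit sequence.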

The robustness of classifiers has become a question of paramount importance in the past few years. Indeed, it has been shown that state-of-the-art deep learning architectures can easily be fooled with imperceptible changes to their inputs. Therefore, finding good measures of robustness of a trained classifier is a key issue in the field. In this paper, we point out that averaging the radius of robustness of samples in a validation set is a statistically weak measure. We propose instead to weight the importance of samples depending on their difficulty. We motivate the proposed score by a theoretical case study using logistic regression, where we show that the proposed score is independent of the choice of the samples it is evaluated upon. We also empirically demonstrate the ability of the proposed score to measure robustness of classifiers with little dependence on the choice of samples in more complex settings, including deep convolutional neural networks and real datasets.
Download manuscript.
Bibtex:
@inproceedings{GirGriLöVer2021,
  author = {Théo Giraudon and Vincent Gripon and Matthias Löwe and Franck Vermet},
  title = {Towards an Intrinsic Definition of Robustness for a Classifier},
  booktitle = {IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year = {2021},
  pages = {4015--4019},
}

Convolutional Neural Networks (CNNs) are state-of-the-art in numerous computer vision tasks such as object classification and detection. However, the large number of parameters they contain leads to a high computational complexity and strongly limits their usability in budget-constrained devices such as embedded devices. In this paper, we propose a combination of a new pruning technique and a quantization scheme that effectively reduce the complexity and memory usage of convolutional layers of CNNs, and replace the complex convolutional operation by a low-cost multiplexer. We perform experiments on CIFAR-10, CIFAR-100 and SVHN and show that the proposed method achieves almost state-of-the-art accuracy, while drastically reducing the computational and memory footprints. We also propose an efficient hardware architecture to accelerate CNN operations. The proposed hardware architecture is a pipeline and accommodates multiple layers working at the same time to speed up the inference process.
Download manuscript.
Bibtex:
@inproceedings{HacGriArzFarBen2020,
  author = {Ghouthi Boukli Hacene and Vincent Gripon and Matthieu Arzel and Nicolas Farrugia and Yoshua Bengio},
  title = {Quantized Guided Pruning for Efficient Hardware Implementations of Convolutional Neural Networks},
  booktitle = {18th IEEE International New Circuits and Systems Conference (NEWCAS)},
  year = {2020},
  pages = {206--209},
}

In most cases deep learning architectures are trained disregarding the number of operations and energy consumption. However, some applications, like embedded systems, can be resource-constrained during inference. A popular approach to reduce the size of a deep learning architecture consists in distilling knowledge from a bigger network (teacher) to a smaller one (student). Directly training the student to mimic the teacher representation can be effective, but it requires that both share the same latent space dimensions. In this work, we focus instead on relative knowledge distillation (RKD), which considers the geometry of the respective latent spaces, allowing for dimension-agnostic transfer of knowledge. Specifically, we introduce a graph-based RKD method, in which graphs are used to capture the geometry of latent spaces. Using classical computer vision benchmarks, we demonstrate the ability of the proposed method to efficiently distill knowledge from the teacher to the student, leading to better accuracy for the same budget as compared to existing RKD alternatives.
Bibtex:
@inproceedings{LasBonHacGriTanOrt2020,
  author = {Carlos Lassance and Myriam Bontonou and Ghouthi Boukli Hacene and Vincent Gripon and Jian Tang and Antonio Ortega},
  title = {Deep geometric knowledge distillation with graphs},
  booktitle = {IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year = {2020},
  pages = {8484--8488},
}
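One way to sketch graph-based relative knowledge distillation (an assumption-laden simplification, not the paper's exact loss) is to compare the cosine-similarity graphs of the two latent spaces; because only pairwise geometry is compared, the teacher and student dimensions may differ:

```python
import numpy as np

def geometric_distillation_loss(teacher_z, student_z):
    """Dimension-agnostic distillation loss: compare the cosine-similarity
    (graph) structure of teacher and student latent spaces."""
    def sim_graph(Z):
        Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
        return Zn @ Zn.T              # cosine-similarity adjacency matrix
    Gt, Gs = sim_graph(teacher_z), sim_graph(student_z)
    return np.mean((Gt - Gs) ** 2)    # mismatch between the two graphs
```

A student whose latent space is an orthogonal transform of the teacher's incurs zero loss, since the pairwise geometry is identical.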

In many application domains such as computer vision, Convolutional Layers (CLs) are key to the accuracy of deep learning methods. However, it is often required to assemble a large number of CLs, each containing thousands of parameters, in order to reach state-of-the-art accuracy, thus resulting in complex and demanding systems that are poorly fitted to resource-limited devices. Recently, methods have been proposed to replace the generic convolution operator by the combination of a shift operation and a simpler 1x1 convolution. The resulting block, called Shift Layer (SL), is an efficient alternative to CLs in the sense that it reaches similar accuracies on various tasks with faster computations and fewer parameters. In this contribution, we introduce Shift Attention Layers (SALs), which extend SLs by using an attention mechanism that learns which shifts are the best while the network function is trained. We demonstrate that SALs are able to outperform vanilla SLs (and CLs) on various object recognition benchmarks while significantly reducing the number of float operations and parameters for the inference.
Download manuscript.
Bibtex:
@inproceedings{HacLasGriCouBen2020,
  author = {Ghouthi Boukli Hacene and Carlos Lassance and Vincent Gripon and Matthieu Courbariaux and Yoshua Bengio},
  title = {Attention Based Pruning for Shift Networks},
  booktitle = {25th International Conference on Pattern Recognition (ICPR)},
  year = {2020},
  pages = {4054--4061},
}

Neural networks have demonstrably achieved state-of-the-art accuracy using low-bit-length integer quantization, yielding both execution time and energy benefits on existing hardware designs that support short bit-lengths. However, the question of finding the minimum bit-length for a desired accuracy remains open. We introduce a training method for minimizing inference bit-length at any granularity while maintaining accuracy. Furthermore, we propose a regularizer that penalizes large bit-length representations throughout the architecture and show how it can be modified to minimize other quantifiable criteria, such as number of operations or memory footprint. We demonstrate that our method learns thrifty representations while maintaining accuracy. With ImageNet, the method produces an average per-layer bit-length of 4.13 and 3.76 bits on AlexNet and ResNet18 respectively, remaining within 2.0% and 0.5% of the baseline top-1 accuracy.
Bibtex:
@inproceedings{NikHacBanLasCouBenGriMos2020,
  author = {Miloš Nikolić and Ghouthi Boukli Hacene and Ciaran Bannon and Alberto Delmas Lascorz and Matthieu Courbariaux and Yoshua Bengio and Vincent Gripon and Andreas Moshovos},
  title = {BitPruning: Learning Bit-lengths for Aggressive and Accurate Quantization},
  booktitle = {ArXiv Preprint: 2002.03090},
  year = {2020},
}

Neural networks have demonstrably achieved state-of-the-art accuracy using low-bit-length integer quantization, yielding both execution time and energy benefits on existing hardware designs that support short bit-lengths. However, the question of finding the minimum bit-length for a desired accuracy remains open. We introduce a training method for minimizing inference bit-length at any granularity while maintaining accuracy. Namely, we propose a regularizer that penalizes large bit-length representations throughout the architecture and show how it can be modified to minimize other quantifiable criteria, such as number of operations or memory footprint. We demonstrate that our method learns thrifty representations while maintaining accuracy. With ImageNet, the method produces an average per-layer bit-length of 4.13, 3.76 and 4.36 bits on AlexNet, ResNet18 and MobileNet V2 respectively, remaining within 2.0%, 0.5% and 0.5% of the base top-1 accuracy.
Download manuscript.
Bibtex:
@inproceedings{NikHacBanLasCouBenGriMos2020,
  author = {Miloš Nikolić and Ghouthi Boukli Hacene and Ciaran Bannon and Alberto Delmas Lascorz and Matthieu Courbariaux and Yoshua Bengio and Vincent Gripon and Andreas Moshovos},
  title = {BitPruning: Learning Bit-lengths for Aggressive and Accurate Quantization},
  booktitle = {ArXiv Preprint},
  year = {2020},
}
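A minimal sketch of the two ingredients, under an assumed uniform symmetric quantization scheme and a plain sum-of-bit-lengths penalty (the paper's exact parameterization may differ):

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of w to the given integer bit-length
    (one sign bit, 2**(bits-1) - 1 positive levels)."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def bitlength_regularizer(bits_per_layer, lam=0.01):
    """Penalty added to the task loss so that training trades accuracy
    against the total bit-length cost of the representation."""
    return lam * float(np.sum(bits_per_layer))
```

In the method described above the bit-lengths are learned (relaxed to real values during training); here they are plain integers for illustration.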

Few-shot classification is a challenging problem due to the uncertainty caused by using few labelled samples. In the past few years, many methods have been proposed to solve few-shot classification, among which transfer-based methods have proved to achieve the best performance. Following this vein, in this paper we propose a novel transfer-based method that builds on two successive steps: 1) preprocessing the feature vectors so that they become closer to Gaussian-like distributions, and 2) leveraging this preprocessing using an optimal-transport-inspired algorithm (in the case of transductive settings). Using standardized vision benchmarks, we prove the ability of the proposed methodology to achieve state-of-the-art accuracy with various datasets, backbone architectures and few-shot settings.
Download manuscript.
Bibtex:
@inproceedings{HuGriPat2020,
  author = {Yuqing Hu and Vincent Gripon and Stéphane Pateux},
  title = {Leveraging the Feature Distribution in Transfer-based Few-Shot Learning},
  booktitle = {ArXiv Preprint},
  year = {2020},
}
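The first step can be sketched as an element-wise power transform followed by projection onto the unit sphere, assuming non-negative (e.g. post-ReLU) features; this is an illustration of the preprocessing idea, not necessarily the paper's exact recipe:

```python
import numpy as np

def power_transform(features, beta=0.5, eps=1e-6):
    """Map non-negative feature vectors closer to Gaussian-like
    distributions via an element-wise power (reducing skewness),
    then project each vector onto the unit sphere."""
    X = np.power(features + eps, beta)
    return X / np.linalg.norm(X, axis=1, keepdims=True)
```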

In the context of few-shot learning, one cannot measure the generalization ability of a trained classifier using validation sets, due to the small number of labeled samples. In this paper, we are interested in finding alternatives to answer the question: is my classifier generalizing well to previously unseen data? We first analyze the reasons for the variability of generalization performance. We then investigate the case of using transfer-based solutions, and consider three settings: i) supervised, where we only have access to a few labeled samples, ii) semi-supervised, where we have access to both a few labeled samples and a set of unlabeled samples, and iii) unsupervised, where we only have access to unlabeled samples. For each setting, we propose reasonable measures that we empirically demonstrate to be correlated with the generalization ability of the considered classifiers. We also show that these simple measures can be used to predict generalization up to a certain confidence. We conduct our experiments on standard few-shot vision datasets.
Download manuscript.
Bibtex:
@inproceedings{BonBéGri2020,
  author = {Myriam Bontonou and Louis Béthune and Vincent Gripon},
  title = {Predicting the Accuracy of a Few-Shot Classifier},
  booktitle = {ArXiv Preprint},
  year = {2020},
}

Graphs are nowadays ubiquitous in the fields of signal processing and machine learning. As a tool used to express relationships between objects, graphs can be deployed to various ends: (i) clustering of vertices, (ii) semi-supervised classification of vertices, (iii) supervised classification of graph signals, and (iv) denoising of graph signals. However, in many practical cases graphs are not explicitly available and must therefore be inferred from data. Validation is a challenging endeavor that naturally depends on the downstream task for which the graph is learnt. Accordingly, it has often been difficult to compare the efficacy of different algorithms. In this work, we introduce several easy-to-use and publicly released benchmarks specifically designed to reveal the relative merits and limitations of graph inference methods. We also contrast some of the most prominent techniques in the literature.
Download manuscript.
Bibtex:
@inproceedings{LasGriMat2020,
  author = {Carlos Lassance and Vincent Gripon and Gonzalo Mateos},
  title = {Graph topology inference benchmarks for machine learning},
  booktitle = {IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP)},
  year = {2020},
  pages = {1--6},
}

Few-shot learning addresses problems for which a limited number of training examples are available. So far, the field has been mostly driven by applications in computer vision. Here, we are interested in adapting recently introduced few-shot methods to solve problems dealing with neuroimaging data, a promising application field. To this end, we create a neuroimaging benchmark dataset for few-shot learning and compare multiple learning paradigms, including meta-learning, as well as various backbone networks. Our experiments show that few-shot methods are able to efficiently decode brain signals using few examples, which paves the way for a number of applications in clinical and cognitive neuroscience, such as identifying biomarkers from brain scans or understanding the generalization of brain representations across a wide range of cognitive tasks.
Download manuscript.
Bibtex:
@inproceedings{BonLioFarGri2020,
  author = {Myriam Bontonou and Giulia Lioi and Nicolas Farrugia and Vincent Gripon},
  title = {Few-shot Decoding of Brain Activation Maps},
  booktitle = {ArXiv Preprint},
  year = {2020},
}

Few-shot classification is a challenge in machine learning where the goal is to train a classifier using a very limited number of labeled examples. This scenario is likely to occur frequently in real life, for example when data acquisition or labeling is expensive. In this work, we consider the problem of post-labeled few-shot unsupervised learning, a classification task where representations are learned in an unsupervised fashion, to be later labeled using very few annotated examples. We argue that this problem is very likely to occur on the edge, when the embedded device directly acquires the data, and the expert needed to perform labeling cannot be prompted often. To address this problem, we consider an algorithm consisting of the concatenation of transfer learning with clustering using Self-Organizing Maps (SOMs). We introduce a TensorFlow-based implementation to speed up the process on multi-core CPUs and GPUs. Finally, we demonstrate the effectiveness of the method using standard off-the-shelf few-shot classification benchmarks.
Download manuscript.
Bibtex:
@inproceedings{KhaGriMir2020,
  author = {Lyes Khacef and Vincent Gripon and Benoit Miramond},
  title = {GPU-based Self-Organizing Maps for Post-Labeled Few-Shot Unsupervised Learning},
  booktitle = {International Conference on Neural Information Processing},
  year = {2020},
  pages = {404--416},
}

Introduced in the late 1980s for generalization purposes, pruning has now become a staple to compress deep neural networks. Despite many innovations brought in the last decades, pruning approaches still face core issues that hinder their performance or scalability. Drawing inspiration from early work in the field, and especially the use of weight decay to achieve sparsity, we introduce Selective Weight Decay (SWD), which realizes efficient continuous pruning throughout training. Our approach, theoretically grounded on Lagrangian Smoothing, is versatile and can be applied to multiple tasks, networks and pruning structures. We show that SWD compares favorably to state-of-the-art approaches in terms of performance/parameters ratio on the CIFAR-10, Cora and ImageNet ILSVRC2012 datasets.
Download manuscript.
Bibtex:
@inproceedings{TesGriLéArzHanBer2020,
  author = {Hugo Tessier and Vincent Gripon and Mathieu Léonardon and Matthieu Arzel and Thomas Hannagan and David Bertrand},
  title = {Rethinking Weight Decay For Efficient Neural Network Pruning},
  booktitle = {ArXiv Preprint},
  year = {2020},
}
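A toy sketch of the selective idea (assuming magnitude-based targeting of a single flat weight vector; the paper covers richer pruning structures): apply weight decay only to the weights currently targeted for removal, so they are pulled towards zero continuously during training while the surviving weights are left untouched by the penalty:

```python
import numpy as np

def swd_step(w, grad, lr=0.1, wd=0.5, keep=2):
    """One Selective-Weight-Decay-style update: apply the task gradient to
    all weights, but weight decay only to those outside the `keep` largest
    magnitudes (i.e. the ones currently targeted for pruning)."""
    decay_mask = np.ones_like(w, dtype=bool)
    decay_mask[np.argsort(np.abs(w))[-keep:]] = False  # spare the largest
    w = w - lr * grad                                  # task gradient step
    w[decay_mask] -= lr * wd * w[decay_mask]           # decay the rest
    return w
```

Iterating this update drives the targeted weights to zero without a hard pruning step, which is the "continuous pruning" described above.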

Measuring the generalization performance of a Deep Neural Network (DNN) without relying on a validation set is a difficult task. In this work, we propose exploiting Latent Geometry Graphs (LGGs) to represent the latent spaces of trained DNN architectures. Such graphs are obtained by connecting samples that yield similar latent representations at a given layer of the considered DNN. We then obtain a generalization score by looking at how strongly samples of distinct classes are connected in LGGs. This score allowed us to rank 3rd in the NeurIPS 2020 Predicting Generalization in Deep Learning (PGDL) competition.
Download manuscript.
Bibtex:
@inproceedings{LasBéBonHamGri2020,
  author = {Carlos Lassance and Louis Béthune and Myriam Bontonou and Mounia Hamidouche and Vincent Gripon},
  title = {Ranking Deep Learning Generalization using Label Variation in Latent Geometry Graphs},
  booktitle = {ArXiv Preprint},
  year = {2020},
}
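A minimal sketch of such a score, assuming a k-nearest-neighbour LGG and a simple edge-agreement statistic (a simplification of the formulation above): the fewer edges joining samples of distinct classes, the smoother the label signal on the graph, and the higher the score:

```python
import numpy as np

def lgg_label_agreement(latent, labels, k=2):
    """Generalization proxy from a Latent Geometry Graph: connect each
    sample to its k nearest latent neighbours and return the fraction of
    edges joining samples of the same class (higher = smoother labels)."""
    D = np.linalg.norm(latent[:, None] - latent[None, :], axis=2)
    np.fill_diagonal(D, np.inf)               # exclude self-distances
    nn = np.argsort(D, axis=1)[:, :k]         # k nearest neighbours
    same = labels[:, None] == labels[nn]      # per-edge label agreement
    return same.mean()
```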

Vision-based localization is the problem of inferring the pose of the camera given a single image. One solution to this problem is to train a deep neural network to infer the pose of a query image after learning on a dataset of images with known poses. Another, more commonly used approach relies on image retrieval, where the query image is compared against a database of images and its pose is inferred with the help of the retrieved images. The latter approach assumes that images taken from the same place contain the same landmarks and would thus have similar feature representations. These representations can be learned with full supervision to be robust to variations in capture conditions such as time of day and weather. In this work, we introduce a framework to enhance the performance of these retrieval-based localization methods by taking into account additional information, including GPS coordinates and the temporal neighbourhood of the images provided by the acquisition process, in addition to the descriptor similarity of pairs of images in the reference or query database traditionally used for localization. Our method constructs a graph based on this additional information and uses it for robust retrieval by smoothing the feature representations of reference and/or query images. We show that the proposed method significantly improves localization accuracy on two large-scale datasets over the baselines.
Bibtex:
@inproceedings{LasLatGarGriRei2019,
  author = {Carlos Lassance and Yasir Latif and Ravi Garg and Vincent Gripon and Ian Reid},
  title = {Improved Visual Localization via Graph Smoothing},
  booktitle = {ArXiv Preprint},
  year = {2019},
}

In this paper, we tackle the problem of incrementally learning a classifier, one example at a time, directly on chip. To this end, we propose an efficient hardware implementation of a recently introduced incremental learning procedure that achieves state-of-the-art performance by combining transfer learning with majority votes and quantization techniques. The proposed design is able to accommodate both new examples and new classes directly on the chip. We detail the hardware implementation of the method (implemented on an FPGA target) and show it requires limited resources while providing a significant acceleration compared to using a CPU.
Bibtex:
@inproceedings{HacGriFarArzJez2019,
  author = {Ghouthi Boukli Hacene and Vincent Gripon and Nicolas Farrugia and Matthieu Arzel and Michel Jezequel},
  title = {Efficient Hardware Implementation of Incremental Learning and Inference on Chip},
  booktitle = {17th IEEE International New Circuits and Systems Conference (NEWCAS)},
  year = {2019},
  pages = {1--4},
}

In this paper, we introduce a novel layer designed to be used as the output of pre-trained neural networks in the context of classification. Based on Associative Memories, this layer can help design Deep Neural Networks which support incremental learning and that can be (partially) trained in real time on embedded devices. Experiments on the ImageNet dataset and other domain-specific datasets show that it is possible to design more flexible and faster-to-train Neural Networks at the cost of a slight decrease in accuracy.
Download manuscript.
Bibtex:
@inproceedings{JodGriHag2019,
  author = {Quentin Jodelet and Vincent Gripon and Masafumi Hagiwara},
  title = {Transfer Learning with Sparse Associative Memories},
  booktitle = {International Conference on Artificial Neural Networks},
  year = {2019},
  pages = {497--512},
}

Because deep neural networks (DNNs) rely on a large number of parameters and computations, their implementation in energy-constrained systems is challenging. In this paper, we investigate the solution of reducing the supply voltage of the memories used in the system, which results in bit-cell faults. We explore the robustness of state-of-the-art DNN architectures to such defects and propose a regularizer meant to mitigate their effects on accuracy. Our experiments clearly demonstrate the interest of operating the system in a faulty regime to save energy without reducing accuracy.
Download manuscript.
Bibtex:
@inproceedings{HacLedSouGriGag201908,
  author = {Ghouthi Boukli Hacene and François Leduc-Primeau and Amal Ben Soussia and Vincent Gripon and François Gagnon},
  title = {Robustesse des réseaux de neurones profonds aux défaillances mémoire},
  booktitle = {GRETSI},
  year = {2019},
  month = {August},
}

We propose to review some of the main models used for the supervised classification of graph signals with deep learning. The goal of supervised graph signal classification is to classify a signal whose components are defined on the vertices of a graph. Convolutional Neural Networks (CNNs) are very effective at classifying signals defined on a "grid" graph (such as images). However, since they cannot be used on signals defined on an arbitrary graph, other models have emerged, attempting to extend their properties to any graph. The overall objective of this study is to compare some of the main models for supervised classification of graph signals. We also propose a unified formalism.
Download manuscript.
Bibtex:
@inproceedings{BonLasViaGri201908,
  author = {Myriam Bontonou and Carlos Lassance and Jean-Charles Vialatte and Vincent Gripon},
  title = {Un modèle unifié pour la classification de signaux sur graphe avec de l'apprentissage profond},
  booktitle = {GRETSI},
  year = {2019},
  month = {August},
}

Deep Networks have been shown to provide state-of-the-art performance in many machine learning challenges. Unfortunately, they are susceptible to various types of noise, including adversarial attacks and corrupted inputs. In this work we introduce a formal definition of robustness which can be viewed as a localized Lipschitz constant of the network function, quantified in the domain of the data to be classified. We compare this notion of robustness to existing ones, and study its connections with methods in the literature. We evaluate this metric by performing experiments on various competitive vision datasets.
Download manuscript.
Bibtex:
@inproceedings{LasGriTanOrt201908,
  author = {Carlos Lassance and Vincent Gripon and Jian Tang and Antonio Ortega},
  title = {Robustesse structurelle des architectures d'apprentissage profond},
  booktitle = {GRETSI},
  year = {2019},
  month = {August},
}

Predicting the future of Graph-supported Time Series (GTS) is a key challenge in many domains, such as climate monitoring, finance or neuroimaging. Yet it is a highly difficult problem as it requires jointly accounting for time and graph (spatial) dependencies. To simplify this process, it is common to use a two-step procedure in which spatial and time dependencies are dealt with separately. In this paper, we are interested in comparing various linear spatial representations, namely structure-based ones and data-driven ones, in terms of how they help predict the future of GTS. To that end, we perform experiments with various datasets including spontaneous brain activity and raw videos.
Download manuscript.
Bibtex:
@inproceedings{BonLasGriFar20198,
  author = {Myriam Bontonou and Carlos Lassance and Vincent Gripon and Nicolas Farrugia},
  title = {Comparing linear structure-based and data-driven latent spatial representations for sequence prediction},
  booktitle = {Wavelets and Sparsity XVIII},
  year = {2019},
  address = {San Diego, USA},
  month = {August},
}

Deep Networks have been shown to provide state-of-the-art performance in many machine learning challenges. Unfortunately, they are susceptible to various types of noise, including adversarial attacks and corrupted inputs. In this work we introduce a formal definition of robustness which can be viewed as a localized Lipschitz constant of the network function, quantified in the domain of the data to be classified. We compare this notion of robustness to existing ones, and study its connections with methods in the literature. We evaluate this metric by performing experiments on various competitive vision datasets.
Download manuscript.
Bibtex:
@inproceedings{LasGriTanOrt20196,
  author = {Carlos Lassance and Vincent Gripon and Jian Tang and Antonio Ortega},
  title = {Structural Robustness for Deep Learning Architectures},
  booktitle = {Data Science Workshop},
  year = {2019},
  pages = {125--129},
  month = {June},
}

We introduce a novel loss function for training deep learning architectures to perform classification. It consists in minimizing the smoothness of label signals on similarity graphs built at the output of the architecture. Equivalently, it can be seen as maximizing the distances between the network function images of training inputs from distinct classes. As such, only distances between pairs of examples in distinct classes are taken into account in the process, and the training does not prevent inputs from the same class from being mapped to distant locations in the output domain. We show that this loss leads to similar performance in classification as architectures trained using the classical cross-entropy, while offering interesting degrees of freedom and properties. We also demonstrate the interest of the proposed loss to increase robustness of trained architectures to deviations of the inputs.
Download manuscript.
Bibtex:
@inproceedings{BonLasHacGriTanOrt20196,
  author = {Myriam Bontonou and Carlos Lassance and Ghouthi Boukli Hacene and Vincent Gripon and Jian Tang and Antonio Ortega},
  title = {Introducing Graph Smoothness Loss for Training Deep Learning Architectures},
  booktitle = {Data Science Workshop},
  year = {2019},
  pages = {160--164},
  month = {June},
}
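The loss can be sketched as the smoothness of the one-hot label signal on a similarity graph built from network outputs; only pairs from distinct classes contribute, so minimizing it pushes their outputs apart (a simplified illustration assuming a Gaussian kernel for the similarity graph):

```python
import numpy as np

def graph_smoothness_loss(outputs, labels, sigma=1.0):
    """Smoothness of the one-hot label signal on a similarity graph built
    from network outputs. Same-class pairs have zero label difference and
    contribute nothing; minimizing the loss separates distinct classes."""
    Y = np.eye(labels.max() + 1)[labels]                   # one-hot labels
    D2 = ((outputs[:, None] - outputs[None, :]) ** 2).sum(-1)
    W = np.exp(-D2 / (2 * sigma ** 2))                     # similarity graph
    label_diff = ((Y[:, None] - Y[None, :]) ** 2).sum(-1)  # 0 if same class
    return (W * label_diff).sum() / 2                      # sum over pairs
```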

Because deep neural networks (DNNs) rely on a large number of parameters and computations, their implementation in energy-constrained systems is challenging. In this paper, we investigate the solution of reducing the supply voltage of the memories used in the system, which results in bit-cell faults. We explore the robustness of state-of-the-art DNN architectures to such defects and propose a regularizer meant to mitigate their effects on accuracy. Our experiments clearly demonstrate the interest of operating the system in a faulty regime to save energy without reducing accuracy.
Bibtex@inproceedings{HacLedSouGriGag20195,
author = {Ghouthi Boukli Hacene and François
Leduc-Primeau and Amal Ben Soussia and Vincent Gripon
and François Gagnon},
title = {Training modern deep neural networks for
memory-fault robustness},
booktitle = {Proceedings of the IEEE International
Symposium on Circuits and Systems},
year = {2019},
pages = {1--5},
month = {May},
}

Convolutional Neural Networks are very efficient at processing signals defined on a discrete Euclidean space (such as images). However, since they cannot be used on signals defined on an arbitrary graph, other models have emerged, aiming to extend their properties to such domains. We propose to review some of the major deep learning models designed to exploit the underlying graph structure of signals. We express them in a unified formalism, giving them a new and comparative reading.
Download manuscript.
Bibtex@inproceedings{BonLasViaGri20195,
author = {Myriam Bontonou and Carlos Lassance and
Jean-Charles Vialatte and Vincent Gripon},
title = {A Unified Deep Learning Formalism For
Processing Graph Signals},
booktitle = {SDM Special Session on Graph Neural
Networks},
year = {2019},
month = {May},
}

In the past few years, Graph Signal Processing (GSP) has attracted a lot of interest for its aim at extending Fourier analysis to arbitrary discrete topologies described by graphs. Since it is essentially built upon analogies between classical temporal Fourier transforms and the spectrum of ring graphs, these extensions do not necessarily yield the expected convolution and translation operators when adapted to regular multidimensional domains such as 2D grid graphs. In this paper we are interested in alternate definitions of Fourier transforms on graphs, obtained by projecting vertices to regular metric spaces on which the Fourier transform is already well defined. We compare our method with the classical graph Fourier transform and demonstrate its interest for designing accurate convolutional neural networks on graph signals.
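The classical graph Fourier transform the method is compared against can be sketched as follows (a minimal illustration on the ring graph, the case where the Laplacian eigenbasis matches classical Fourier modes; the graph and signal here are assumptions):

```python
import numpy as np

def graph_fourier(x, L):
    """Classical graph Fourier transform: project the signal onto the
    eigenvectors of the combinatorial Laplacian."""
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs.T @ x, eigvals

# Ring graph on n vertices: the setting on which GSP analogies with
# the classical temporal Fourier transform are built.
n = 8
A = np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0)
L = np.diag(A.sum(axis=1)) - A

x = np.random.randn(n)
xhat, freqs = graph_fourier(x, L)
# The eigenbasis is orthonormal, so the transform preserves energy.
assert np.isclose(np.sum(x ** 2), np.sum(xhat ** 2))
```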
Download manuscript.
Bibtex@inproceedings{GreLasDupGri2018,
author = {Nicolas Grelier and Carlos Rosar Kos
Lassance and Elsa Dupraz and Vincent Gripon},
title = {Graph-Projected Signal Processing},
booktitle = {IEEE GlobalSIP},
year = {2018},
pages = {763--767},
}

We propose an extension of Convolutional Neural Networks (CNNs) to graph-structured data, including strided convolutions and data augmentation on graphs. Our method matches the accuracy of state-of-the-art CNNs when applied on images, without any prior about their 2D regular structure. On fMRI data, we obtain a significant gain in accuracy compared with existing graph-based alternatives.
Download manuscript.
Bibtex@inproceedings{LasViaGri2018,
author = {Carlos Eduardo Rosar Kos Lassance and
Jean-Charles Vialatte and Vincent Gripon},
title = {Matching Convolutional Neural Networks
without Priors about Data},
booktitle = {Proceedings of Data Science Workshop},
year = {2018},
pages = {234--238},
}

Transfer learning using deep neural networks as feature extractors has become increasingly popular over the past few years. It allows one to obtain state-of-the-art accuracy on datasets too small to train a deep neural network on their own, and it provides cutting-edge descriptors that, combined with nonparametric learning methods, allow rapid and flexible deployment of well-performing solutions in computationally restricted settings. In this paper, we are interested in showing that the features extracted using deep neural networks have specific properties which can be used to improve accuracy of downstream nonparametric learning methods. Namely, we demonstrate that for some distributions where information is embedded in a few coordinates, segmenting feature vectors can lead to better accuracy. We show how this model can be applied to real datasets by performing experiments using three mainstream deep neural network feature extractors and four databases, in vision and audio.
Download manuscript.
Bibtex@inproceedings{GriHacLöVer20184,
author = {Vincent Gripon and Ghouthi Boukli Hacene
and Matthias Löwe and Franck Vermet},
title = {Improving Accuracy of Nonparametric
Transfer Learning Via Vector Segmentation},
booktitle = {Proceedings of ICASSP},
year = {2018},
pages = {2966--2970},
month = {April},
}

Deep Neural Networks (DNNs) are state-of-the-art in many machine learning benchmarks. Understanding how they perform is a major open question. In this paper, we are interested in using graph signal processing to monitor the intermediate representations obtained in a simple DNN architecture. We compare different metrics and measures and show that the smoothness of label signals on k-nearest neighbor graphs is a good candidate to interpret the role of individual layers in achieving good performance.
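The monitored quantity can be sketched as the Laplacian quadratic form of the label signal on a k-nearest neighbor graph built from a layer's representations (an assumed reading of the abstract; the graph construction details and toy data are illustrative):

```python
import numpy as np

def knn_smoothness(feats, labels, k=2):
    """Smoothness y^T L y of the binary label signal on a k-NN graph
    built from intermediate representations (sketch)."""
    n = len(feats)
    d2 = np.sum((feats[:, None, :] - feats[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)          # no self-loops
    A = np.zeros((n, n))
    for i in range(n):
        A[i, np.argsort(d2[i])[:k]] = 1.0
    A = np.maximum(A, A.T)                # symmetrize the k-NN graph
    L = np.diag(A.sum(axis=1)) - A        # combinatorial Laplacian
    y = labels.astype(float)
    return float(y @ L @ y)               # counts edges cut between classes

# Representations that cluster by class give a smoother label signal.
feats = np.array([[0.0, 0], [0.1, 0], [0.2, 0], [5.0, 0], [5.1, 0], [5.2, 0]])
aligned = np.array([0, 0, 0, 1, 1, 1])
shuffled = np.array([0, 1, 0, 1, 0, 1])
assert knn_smoothness(feats, aligned) < knn_smoothness(feats, shuffled)
```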
Download manuscript.
Bibtex@inproceedings{GriOrtGir20182,
author = {Vincent Gripon and Antonio Ortega and
Benjamin Girault},
title = {An Inside Look at Deep Neural Networks
using Graph Signal Processing},
booktitle = {Proceedings of ITA},
year = {2018},
pages = {1--9},
month = {February},
}

Thanks to their ability to absorb large amounts of data, Convolutional Neural Networks (CNNs) have become state-of-the-art in numerous vision challenges, sometimes even on par with biological vision. They rely on optimisation routines that typically require intensive computational power, so the question of embedded architectures is a very active field of research. Of particular interest is the problem of incremental learning, where the device adapts to new observations or classes. To tackle this challenging problem, we propose to combine pre-trained CNNs with binary associative memories, using product random sampling as an intermediate between the two methods. The obtained architecture requires significantly less computational power and memory usage than existing counterparts. Moreover, using various challenging vision datasets we show that the proposed architecture is able to perform one-shot learning, even using only a small portion of the dataset, while keeping very good accuracy.
Download manuscript.
Bibtex@inproceedings{HacGriFarArzJez2017,
author = {Ghouthi Boukli Hacene and Vincent Gripon
and Nicolas Farrugia and Matthieu Arzel and Michel
Jezequel},
title = {Budget Restricted Incremental Learning with
Pre-Trained Convolutional Neural Networks and Binary
Associative Memories},
booktitle = {Proceedings of SIPS},
year = {2017},
pages = {1063--1073},
}

G. B. Hacene, V. Gripon, N. Farrugia, M. Arzel and M. Jezequel, "Incremental Learning on Chip," in Proceedings of GlobalSIP, pp. 789-792, 2017.
Learning on chip (LOC) is a challenging problem in which an embedded system learns a model and uses it to process and classify unknown data, while adapting to new observations or classes. It may require intensive computational power to adapt to new data, leading to a complex hardware implementation. We address this issue by introducing an incremental learning method based on the combination of a pre-trained Convolutional Neural Network (CNN) and majority votes, using Product Quantization (PQ) as a bridge between them. We detail a hardware implementation of the proposed method (validated on an FPGA target) using limited hardware resources while providing substantial processing acceleration compared to a CPU counterpart.
Download manuscript.
Bibtex@inproceedings{HacGriFarArzJez2017,
author = {Ghouthi Boukli Hacene and Vincent Gripon
and Nicolas Farrugia and Matthieu Arzel and Michel
Jezequel},
title = {Incremental Learning on Chip},
booktitle = {Proceedings of GlobalSIP},
year = {2017},
pages = {789--792},
}

Graph Signal Processing (GSP) is a promising framework to analyze multidimensional neuroimaging datasets, while taking into account both the spatial and functional dependencies between brain signals. In the present work, we apply dimensionality reduction techniques based on graph representations of the brain to decode brain activity from real and simulated fMRI datasets. We introduce seven graphs obtained from a) geometric structure and/or b) functional connectivity between brain areas at rest, and compare them when performing dimension reduction for classification. We show that mixed graphs using both a) and b) offer the best performance. We also show that graph sampling methods perform better than classical dimension reduction including Principal Component Analysis (PCA) and Independent Component Analysis (ICA).
Download manuscript.
Bibtex@inproceedings{MéFarPasGri2017,
author = {Mathilde Ménoret and Nicolas Farrugia and
Bastien Pasdeloup and Vincent Gripon},
title = {Evaluating Graph Signal Processing for
Neuroimaging Through Classification and Dimensionality
Reduction},
booktitle = {Proceedings of GlobalSIP},
year = {2017},
pages = {618--622},
}

We propose a generic layer formulation that extends the properties of convolutional layers to any domain that can be described by a graph topology. Namely, we use the support of its adjacency matrix to design learnable weight sharing filters able to exploit the underlying structure of signals in the same fashion as for images. The proposed formulation makes it possible to learn the weights of the filter as well as a scheme that controls how they are shared across the graph. We perform validation experiments with image datasets and show that these filters offer performance comparable with convolutional ones.
Download manuscript.
Bibtex@inproceedings{ViaGriCop2017,
author = {Jean-Charles Vialatte and Vincent Gripon
and Gilles Coppin},
title = {Learning Local Receptive Fields and their
Weight Sharing Scheme on Graphs},
booktitle = {Proceedings of GlobalSIP},
year = {2017},
pages = {623--627},
}

For the past few years, the domain of graph signal processing has extended classical Fourier analysis to domains described by graphs. Most of the results were obtained by analogy with the study of heat propagation. We propose to perform a similar analysis in the context of tropical algebra, widely used in theoretical computer science to monitor propagation processes over graphs of distances. We introduce a Tropical Graph Fourier Transform and prove a few results on graph inference and the existence of a tropical uncertainty principle.
Download manuscript.
Bibtex@inproceedings{Gri201710,
author = {Vincent Gripon},
title = {Tropical Graph Signal Processing},
booktitle = {Proceedings of the Asilomar
conference},
year = {2017},
pages = {50--54},
month = {October},
}

The brain is a noisy system subject to energy constraints. These facts are rarely taken into account when modelling artificial neural networks. In this paper, we are interested in demonstrating that those factors can actually lead to the appearance of robust associative memories. We first propose a simplified model of noise in the brain, taking into account synaptic noise and interference from neurons external to the network. When coarsely quantized, we show that this noise can be reduced to insertions and erasures. We take a neural network with recurrent modifiable connections, and subject it to noisy external inputs. We introduce an energy usage limitation principle in the network as well as consolidated Hebbian learning, resulting in an incremental processing of inputs. We show that the connections naturally formed correspond to state-of-the-art binary sparse associative memories.
Download manuscript.
Bibtex@inproceedings{CoyGriLanBer201709,
author = {Eliott Coyac and Vincent Gripon and
Charlotte Langlais and Claude Berrou},
title = {Robust Associative Memories Naturally
Occurring From Recurrent Hebbian Networks Under Noise},
booktitle = {arXiv preprint},
year = {2017},
month = {September},
}

Artificial neural networks are so called because they are supposed to be inspired by the brain and by the way neurons work. While some networks are used purely for computational purposes and do not endeavor to be a plausible representation of what happens in the brain, such as deep learning neural networks, others do. However, the question of noise in the brain and its impact on the functioning of those networks has been little studied. For example, it is widely known that synapses misfire with a significant probability. We model this noise and study its impact on associative memories powered by neural networks: neural clique networks, with Hopfield networks as a reference point. We show that synaptic noise can in fact slightly improve the performance of the decoding process of neural clique networks by avoiding local minima.
Download manuscript.
Bibtex@inproceedings{CoyGriLanBer201702,
author = {Eliott Coyac and Vincent Gripon and
Charlotte Langlais and Claude Berrou},
title = {Performance of Neural Clique Networks
Subject to Synaptic Noise},
booktitle = {Proceedings of Cognitive},
year = {2017},
pages = {4--9},
month = {February},
}

The most efficient architectures of associative memories are based on binary neural networks. For example, Sparse Clustered Networks (SCNs) are able to achieve almost optimal memory efficiency while providing robust indexation of pieces of information through cliques in a neural network. In the canonical formulation of the associative memory problem, the unique stored message matching a given input probe is to be retrieved. In this paper, we focus on the more general problem of finding all messages matching the given probe. We consider real datasets in which many different messages can match given probes, which cannot be done with uniformly distributed messages due to the unlikelihood of their sharing large common parts with one another. Namely, we implement a crossword dictionary containing 8-letter English words, and a chess endgame dataset, using associative memories based on binary neural networks. We explain how to adapt the SCN architecture to these challenging datasets and introduce a backtracking procedure to retrieve all completions of the given input. We assess the performance of the proposed method using different measures and discuss the importance of parameters.
Download manuscript.
Bibtex@inproceedings{HacGriFarArzJez201702,
author = {Ghouthi Boukli Hacene and Vincent Gripon
and Nicolas Farrugia and Matthieu Arzel and Michel
Jezequel},
title = {Finding All Matches in a Database using
Binary Neural Networks},
booktitle = {Proceedings of Cognitive},
year = {2017},
pages = {59--64},
month = {February},
}

In order to perform well in practice, Recurrent Neural Networks (RNN) require computationally heavy architectures, such as the Gated Recurrent Unit (GRU) or Long Short Term Memory (LSTM). Indeed, the original Vanilla model fails to encapsulate medium- and long-term sequential dependencies. The aim of this paper is to show that gradient training issues, which have motivated the introduction of LSTM and GRU models, are not sufficient to explain the failure of the simplest RNN. Using the example of Reber's grammar, we propose an experimental measure of both Vanilla and GRU models, which suggests an intrinsic difference in their dynamics. A better mathematical understanding of this difference could lead to more efficient models without compromising performance.
Download manuscript.
Bibtex@inproceedings{StFarGri201702,
author = {Tristan Stérin and Nicolas Farrugia and
Vincent Gripon},
title = {An Intrinsic Difference Between Vanilla
RNNs and GRU Models},
booktitle = {Proceedings of Cognitive},
year = {2017},
pages = {76--81},
month = {February},
}

In many domains (e.g. Internet of Things, neuroimaging) signals are naturally supported on graphs. These graphs usually convey information on the similarity between the values taken by the signal at the corresponding vertices. An interest of using graphs is that they allow one to define ad hoc operators to perform signal processing. Among them, translations are of paramount importance in many tasks. In this paper we propose new definitions of translations on graphs using a few simple properties. Namely, we propose to define translations as functions from vertices to adjacent ones that preserve neighborhood properties of the graph. We show that our definitions, contrary to other works on the subject, match usual translations on grid graphs.
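The two informal requirements above can be sketched as a check on a vertex map (an assumed reading of the definition, not the authors' exact formalism; the ring-graph example is an assumption):

```python
import numpy as np

def is_translation(phi, A):
    """Check a candidate translation phi on a graph with adjacency A:
    (1) each vertex is sent to an adjacent vertex;
    (2) edges are mapped to edges (neighborhoods are preserved)."""
    n = A.shape[0]
    moves_to_neighbor = all(A[v, phi[v]] == 1 for v in range(n))
    preserves_edges = all(A[phi[u], phi[v]] == 1
                          for u in range(n) for v in range(n) if A[u, v] == 1)
    return moves_to_neighbor and preserves_edges

# On a ring graph, the circular shift is a translation in this sense,
# matching the usual translation on a periodic 1D domain.
n = 6
A = np.roll(np.eye(n, dtype=int), 1, axis=0) | np.roll(np.eye(n, dtype=int), -1, axis=0)
shift = [(v + 1) % n for v in range(n)]
assert is_translation(shift, A)
# Collapsing every vertex onto vertex 0 is not: 0 is not adjacent to itself.
assert not is_translation([0] * n, A)
```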
Bibtex@inproceedings{GrePasViaGri201610,
author = {Nicolas Grelier and Bastien Pasdeloup and
JeanCharles Vialatte and Vincent Gripon},
title = {Neighborhood-Preserving Translations on
Graphs},
booktitle = {Proceedings of GlobalSIP},
year = {2016},
pages = {410--414},
month = {October},
}

E. Coyac, V. Gripon, C. Langlais and C. Berrou, "Distributed Coding and Synaptic Pruning," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 206-210, September 2016.
This paper deals with the modeling of synaptic pruning in the developing brain, at the informational level. Relying on the hypotheses of clique-based memory and Hebbian learning, we consider several scenarios to try to understand how reliable point-to-point communication may be achieved in the cortex, whereas at birth, neurons are connected to a multitude of other neurons, similar to a broadcast-type connection. It is shown that quasi-perfect transfer of information can be obtained in a plausible way using simple rules if a mechanism of synaptic normalization is implemented at the receiver side.
Download manuscript.
Bibtex@inproceedings{CoyGriLanBer20169,
author = {Eliott Coyac and Vincent Gripon and
Charlotte Langlais and Claude Berrou},
title = {Distributed Coding and Synaptic Pruning},
booktitle = {Proceedings of the 9th International
Symposium on Turbo Codes and Iterative Information
Processing},
year = {2016},
pages = {206--210},
month = {September},
}

Establishing correspondences between image features is a fundamental problem in many computer vision tasks. It is traditionally viewed as a graph matching problem, and solved using an optimization procedure. In this paper, we propose a new approach to solving the correspondence problem from a coding/decoding perspective. We then present an iterative matching algorithm inspired by the turbo-decoding concept. We provide an experimental evaluation of the proposed method, and show that it performs better than state-of-the-art algorithms in the presence of clutter, thanks to turbo-style decoding.
Download manuscript.
Bibtex@inproceedings{AboGriCop20169,
author = {Ala Aboudib and Vincent Gripon and Gilles
Coppin},
title = {A Turbo-Inspired Iterative Approach for
Correspondence Problems of Image Features},
booktitle = {Proceedings of the 9th International
Symposium on Turbo Codes and Iterative Information
Processing},
year = {2016},
pages = {226--230},
month = {September},
}

P. Tigréat, C. R. K. Lassance, X. Jiang, V. Gripon and C. Berrou, "Assembly Output Codes for Learning Neural Networks," in Proceedings of the 9th International Symposium on Turbo Codes and Iterative Information Processing, pp. 285-289, September 2016.
Neural network-based classifiers usually encode the class labels of input data via a completely disjoint code, i.e. a binary vector with only one bit associated with each category. We use coding theory to propose assembly codes where each element is associated with several classes, making for better target vectors. These codes emulate the combination of several classifiers, which is a well-known method to improve decision accuracy. Our experiments on datasets such as MNIST with a multilayer neural network show that assembly output codes, which are characterized by a higher minimum Hamming distance, result in better classification performance. These codes are also well suited to the use of clustered clique-based networks in category representation.
Download manuscript.
Bibtex@inproceedings{TigLasJiaGriBer20169,
author = {Philippe Tigréat and Carlos Rosar Kos
Lassance and Xiaoran Jiang and Vincent Gripon and
Claude Berrou},
title = {Assembly Output Codes for Learning Neural
Networks},
booktitle = {Proceedings of the 9th International
Symposium on Turbo Codes and Iterative Information
Processing},
year = {2016},
pages = {285--289},
month = {September},
}

The problem of finding nearest neighbours in terms of Euclidean distance, Hamming distance or another distance metric is a very common operation in computer vision and pattern recognition. In order to accelerate the search for the nearest neighbour in large datasets, many methods rely on a coarse-to-fine approach. In this paper we propose to combine Product Quantization (PQ) and binary neural associative memories to perform the coarse search. Our motivation lies in the fact that, in neural associative memories, the dimension of the representation associated with a set of k vectors is independent of k. We run experiments on the TEXMEX SIFT1M and MNIST databases and observe significant improvements in terms of complexity of the search compared to raw PQ.
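The PQ baseline the paper builds on can be sketched as follows (a minimal illustration, not the paper's method; the random "codebooks" stand in for the k-means codebooks used in practice, and the toy data are assumptions):

```python
import numpy as np

def pq_encode(X, codebooks):
    """Product Quantization: split each vector into sub-vectors and
    quantize each against its own codebook."""
    m = len(codebooks)
    subs = np.split(X, m, axis=1)
    return np.stack([np.argmin(((s[:, None] - cb[None]) ** 2).sum(-1), axis=1)
                     for s, cb in zip(subs, codebooks)], axis=1)

def pq_search(q, codes, codebooks):
    """Asymmetric search: precompute per-subspace distance tables for the
    query, then sum table entries for every database code."""
    m = len(codebooks)
    qs = np.split(q, m)
    tables = [((qs[j][None] - codebooks[j]) ** 2).sum(-1) for j in range(m)]
    dists = sum(tables[j][codes[:, j]] for j in range(m))
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
m, ksub = 2, 16
# Toy codebooks: random database sub-vectors as centroids (k-means in practice)
codebooks = [X[rng.choice(100, ksub, replace=False), j * 4:(j + 1) * 4]
             for j in range(m)]
codes = pq_encode(X, codebooks)
# Querying a database vector returns itself: its own code minimizes
# the summed table distances.
assert pq_search(X[0], codes, codebooks) == 0
```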
Download manuscript.
Bibtex@inproceedings{FerGriJia201607,
author = {Demetrio Ferro and Vincent Gripon and
Xiaoran Jiang},
title = {Nearest Neighbour Search Using Binary
Neural Networks},
booktitle = {Proceedings of IJCNN},
year = {2016},
pages = {5106--5112},
month = {July},
}

Measuring similarity among graphs is recognized as a nontrivial problem. Most of the algorithms proposed so far ignore the spatial location of vertices, which is a crucial factor in the context of brain networks. In this paper, we present a novel algorithm, called “SimNet”, for measuring the similarity between two graphs whose vertices represent the position of sources over the cortex. The novelty is to account for differences at the level of spatially-registered vertices and edges. Simulated graphs are used to evaluate the algorithm performance and to compare it with methods reported elsewhere. Results show that SimNet is able to quantify the similarity between two graphs under a spatial constraint based on the 3D location of edges. The application of SimNet on real data (dense EEG) reveals the presence of spatially different brain network modules activating during cognitive activity.
Download manuscript.
Bibtex@inproceedings{MheHasWenKhaDufGriBer2015,
author = {A. Mheich and M. Hassan and F. Wendling
and M. Khalil and O. Dufor and V. Gripon and C.
Berrou},
title = {SimNet: A new algorithm for measuring brain
networks similarity},
booktitle = {Proceedings of the ICABME international
conference},
year = {2015},
pages = {119--122},
}

Signal processing on graphs is a recent research domain that aims at generalizing classical tools in signal processing, in order to analyze signals evolving on complex domains. Such domains are represented by graphs, for which one can compute a particular matrix, called the normalized Laplacian [3]. It was shown that the eigenvalues of this Laplacian correspond to the frequencies of the Fourier domain in classical signal processing [2]. Therefore, the frequency domain is not the same for every support graph. A consequence of this is that there is no nontrivial generalization of Heisenberg’s uncertainty principle, which states that a signal cannot be localized both in the time domain and in the frequency domain. A way to generalize this principle, introduced by Agaskar & Lu in [1], consists in determining a curve that represents a lower bound on the compromise between precision in the graph domain and precision in the spectral domain. The aim of this paper is to propose a characterization of the signals achieving this curve, for a larger class of graphs than the one studied by Agaskar & Lu.
Download manuscript.
Bibtex@inproceedings{PasGriMerPas2015,
author = {Bastien Pasdeloup and Vincent Gripon and
Grégoire Mercier and Dominique Pastor},
title = {Vers une caractérisation de la courbe
d'incertitude pour des graphes portant des signaux},
booktitle = {Proceedings of the GRETSI conference},
year = {2015},
}

Associative memories are capable of retrieving a message previously stored when an incomplete version of this message is presented. A model of associative memory based on binary neurons and binary connections, named Clustered Neural Network, has recently been introduced. The performance of this model drops when the stored message distribution is non-uniform. The goal of this paper is to propose a new model of associative memory inspired by Clustered Neural Networks and Restricted Boltzmann Machines, in order to decrease the vulnerability to non-uniform distributions. In addition, we present a fully parallel hardware design of the model. The proposed implementation multiplies the number of stored messages by a factor of 3 with a 40% increase in complexity.
Download manuscript.
Bibtex@inproceedings{DanGriCouCon2015,
author = {Robin Danilo and Vincent Gripon and
Philippe Coussy and Laura CondeCanencia},
title = {Réseaux de Clusters de Neurones
Restreints},
booktitle = {Proceedings of the GRETSI conference},
year = {2015},
}

Artificial neural networks are inspired by biological neural networks present in the brain, and biological plausibility is often used as an argument to validate or criticize a neural network proposal. However, the brain is a system with a lot of interferences and the behaviour of neural networks with respect to this noise has not often been studied. This paper introduces a model to represent noise inside the brain, and studies how neural clique networks respond to that noise. It is shown that the noise can improve the neural clique network performance by avoiding local minima. We also show the impact of this noise on the widely known Hopfield networks.
Download manuscript.
Bibtex@inproceedings{CoyGriLan2015,
author = {Eliott Coyac and Vincent Gripon and
Charlotte Langlais},
title = {Impact du bruit synaptique sur les
performances des réseaux de cliques neurales},
booktitle = {Proceedings of the GRETSI conference},
year = {2015},
}

Signal processing on graphs has received a lot of attention in recent years. Many techniques have arisen, inspired by classical signal processing ones, to allow studying signals on any kind of graph. A common aspect of these techniques is that they require a graph correctly modeling the studied support to explain the signals that are observed on it. However, in many cases, such a graph is unavailable or has no real physical existence. An example of this latter case is a set of sensors randomly thrown in a field, which obviously observe related information. To study such signals, there is no intuitive choice for a support graph. In this document, we address the problem of inferring a graph structure from the observation of signals, under the assumption that they result from the diffusion of initially i.i.d. signals. To validate our approach, we design an experimental protocol in which we diffuse signals on a known graph. Then, we forget the graph and show that we are able to retrieve it very precisely from knowledge of the diffused signals alone.
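The premise behind this recovery can be sketched as follows (an illustrative sketch under the assumption of a symmetric diffusion matrix; the paper's full method is richer): if observed signals are diffusions x = A^t w of i.i.d. white inputs w, their covariance is C = A^(2t), which shares A's eigenvectors, so spectral analysis of the covariance reveals the diffusion graph.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.random((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)                  # random symmetric diffusion graph
t = 3
# Exact covariance of x = A^t w when w is white: C = A^t (A^t)^T = A^(2t)
C = np.linalg.matrix_power(A, 2 * t)
lam, U = np.linalg.eigh(A)
# C shares A's eigenvectors, with eigenvalues lam ** (2t): this spectral
# footprint is what makes the graph recoverable from diffused signals.
assert np.allclose(C @ U, U @ np.diag(lam ** (2 * t)))
```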
Download manuscript.
Bibtex@inproceedings{PasRabGriPasMer201510,
author = {Bastien Pasdeloup and Michael Rabbat and
Vincent Gripon and Dominique Pastor and Grégoire
Mercier},
title = {Graph Reconstruction from the Observation
of Diffused Signals},
booktitle = {Proceedings of the 53rd Allerton
Conference},
year = {2015},
pages = {1386--1390},
month = {October},
}

The uncertainty principle states that a signal cannot be localized both in time and frequency. With the aim of extending this result to signals on graphs, Agaskar & Lu introduce notions of graph and spectral spreads. They show that a graph uncertainty principle holds for some families of unweighted graphs. This principle states that a signal cannot be simultaneously localized both in graph and spectral domains. In this paper, we aim to extend their work to weighted graphs. We show that a naive extension of their definitions leads to inconsistent results such as discontinuity of the graph spread when regarded as a function of the graph structure. To circumvent this problem, we propose another definition of graph spread that relies on an inverse similarity matrix. We also discuss the choice of the distance function that appears in this definition. Finally, we compute and plot uncertainty curves for families of weighted graphs.
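The two spreads discussed above can be written as follows (restating Agaskar & Lu's definitions as an assumption, with u_0 a reference vertex, d the geodesic distance, and L the normalized Laplacian):

```latex
% Graph spread around u_0, and spectral spread via the Laplacian quadratic form:
\Delta_{g,u_0}^2(x) = \frac{1}{\|x\|^2} \sum_{v \in V} d(u_0, v)^2 \, x(v)^2,
\qquad
\Delta_s^2(x) = \frac{x^\top L x}{\|x\|^2}.
```

On a weighted graph, the naive step is to replace d by a weight-dependent distance; the discontinuity discussed in the abstract arises from that choice, motivating the inverse-similarity-based definition.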
Bibtex@inproceedings{PasAlaGriRab201507,
author = {Bastien Pasdeloup and Réda Alami and
Vincent Gripon and Michael Rabbat},
title = {Toward an uncertainty principle for
weighted graphs},
booktitle = {Proceedings of the 23rd European Signal
Processing Conference},
year = {2015},
pages = {1496--1500},
month = {July},
}

Associative memories are capable of retrieving previously stored patterns given parts of them. This feature makes them good candidates for pattern detection in images. Clustered Neural Networks are a recently introduced family of associative memories that allow fast pattern retrieval when implemented in hardware. In this paper, we propose a new pattern retrieval algorithm that results in a dramatically lower error rate compared to that of the conventional approach when used in an oriented edge detection process. This function plays an important role in image processing. Furthermore, we present the corresponding hardware architecture and implementation of the new approach in comparison with a conventional architecture in the literature, and show that the proposed architecture does not significantly affect hardware complexity.
Download manuscript.
Bibtex@inproceedings{DanJarGriCouConGro20155,
author = {Robin Danilo and Homman Jarollahi and
Vincent Gripon and Philippe Coussy and Laura
Conde-Canencia and Warren J. Gross},
title = {Algorithm and Implementation of an
Associative Memory for Oriented Edge Detection Using
Improved Clustered Neural Networks},
booktitle = {Proceedings of ISCAS Conference},
year = {2015},
pages = {2501--2504},
month = {May},
}

Associative memories are an alternative to classical indexed memories that are capable of retrieving a message previously stored when an incomplete version of this message is presented. Recently a new model of associative memory based on binary neurons and binary links has been proposed. This model, named Clustered Neural Network (CNN), offers large storage diversity (number of messages stored) and fast message retrieval when implemented in hardware. The performance of this model drops when the stored message distribution is non-uniform. In this paper, we enhance the CNN model to support non-uniform message distributions by adding features of Restricted Boltzmann Machines. In addition, we present a fully parallel hardware design of the model. The proposed implementation multiplies the performance (diversity) of Clustered Neural Networks by a factor of 3 with an increase of complexity of 40%.
Download manuscript.
Bibtex@inproceedings{DanGriCouConGro20155,
author = {Robin Danilo and Vincent Gripon and
Philippe Coussy and Laura Conde-Canencia and Warren J.
Gross},
title = {Restricted Clustered Neural Network for
Storing Real Data},
booktitle = {Proceedings of the GLSVLSI conference},
year = {2015},
pages = {205--210},
month = {May},
}

Measuring the similarity among graphs is a challenging issue in many disciplines including neuroscience. Several algorithms, mainly based on vertex or edge properties, were proposed to address this issue. Most of them ignore the physical location of the vertices, which is a crucial factor in the analysis of brain networks. Indeed, functional brain networks are usually represented as graphs composed of vertices (brain regions) connected by edges (functional connectivity). In this paper, we propose a novel algorithm to measure a similarity between graphs. The novelty of our approach is to account for vertices, edges and spatiality at the same time. The proposed algorithm is evaluated using synthetic graphs. It shows high ability to detect and measure similarity between graphs. An application to real functional brain networks is then described. The algorithm allows for quantification of the inter-subject variability during a picture naming task.
Download manuscript.
Bibtex@inproceedings{MheHasGriDufKhaBerWen20154,
author = {A. Mheich and M. Hassan and V. Gripon and
O. Dufor and M. Khalil and C. Berrou and F. Wendling},
title = {A novel algorithm for measuring graph
similarity: application to brain networks},
booktitle = {Proceedings of the IEEE EMBS Neural
Engineering Conference},
year = {2015},
pages = {1068-1071},
month = {April},
}

The focus of visual attention has been argued to play a key role in object recognition. Many computational models of visual attention have been proposed to estimate the locations of eye fixations driven by bottom-up stimuli. Most of these models rely on pyramids consisting of multiple scaled versions of the visual scene. This design aims at capturing the fact that neural cells in higher visual areas tend to have larger receptive fields (RFs). On the other hand, very few models represent the multi-scaling resulting from the eccentricity-dependent RF sizes within each visual layer, also known as the cortical magnification effect. In this paper, we demonstrate that using a cortical-magnification-like mechanism can lead to competitive alternatives to pyramidal approaches in the context of attentional modeling. Moreover, we argue that introducing such a mechanism equips the proposed model with additional properties, related to overt attention and distance-dependent saliency, that are worth exploring.
Download manuscript.
Bibtex@inproceedings{AboGriCop20154,
author = {Ala Aboudib and Vincent Gripon and Gilles
Coppin},
title = {A Model of Bottom-Up Visual Attention Using
Cortical Magnification},
booktitle = {Proceedings of ICASSP},
year = {2015},
pages = {1493-1497},
month = {April},
}

Carrying information in fully interconnected subgraphs (cliques) has recently proven effective in networks of neural cliques. In this paper, a face recognition system using such networks is presented, where local descriptors are used to perform feature extraction. Among the wide range of possible image descriptors for face recognition, we focus specifically on the Scale Invariant Feature Transform (SIFT). In contrast to standard methods, our proposed method requires no empirically chosen threshold. Moreover, it performs matching between sets of features, in addition to individual feature matching; we thus favor joint occurrences of descriptors during the recognition process. We compare our approach to state-of-the-art face recognition systems based on SIFT descriptors. The evaluation is carried out on the Olivetti and Oracle Research Laboratory (ORL) face database, whose diversity is significant for assessing face recognition methods.
Best paper award
Download manuscript.
Bibtex@inproceedings{GooPasGri20153,
author = {Ehsan Sedgh Gooya and Dominique Pastor and
Vincent Gripon},
title = {Automatic face recognition using SIFT and
networks of tagged neural cliques},
booktitle = {Proceedings of Cognitive},
year = {2015},
pages = {57-61},
month = {March},
}

Associative memories, a classical model for brain long-term memory, face interference between old and new memories. Usually, the only remedy is to enlarge the network so as to retain more memories without collisions: this is the network's size-diversity trade-off. We propose a novel way of representing data in these networks that provides another means to extend diversity without resizing the network. Our analysis and simulations show that this method is a viable alternative, which fits perfectly in cases where the network's size is constrained, such as neuromorphic FPGA boards implementing associative memories.
Download manuscript.
Bibtex@inproceedings{LarGooGriPas20153,
author = {Stephen Larroque and Ehsan Sedgh Gooya and
Vincent Gripon and Dominique Pastor},
title = {Using Tags to Improve Diversity of Sparse
Associative Memories},
booktitle = {Proceedings of Cognitive},
year = {2015},
pages = {1-7},
month = {March},
}

Associative memories aim at matching an input noisy vector with a stored one. The matched vector satisfies a minimum distance criterion with respect to the inner metric of the device. This problem of finding nearest neighbors in terms of Euclidean or Hamming distances is a very common operation in machine learning and pattern recognition. However, the inner metrics of associative memories are often ill-suited to practical scenarios. In this paper, we adapt Willshaw networks in order to use them for accelerating nearest neighbor search with limited impact on accuracy. We provide a theoretical analysis of our method for binary sparse vectors. We also test our method using the MNIST handwritten digits database. Both our analysis on synthetic data and our experiments on real data evidence a significant gain in complexity with negligible loss in performance compared to exhaustive search.
Download manuscript.
Bibtex@inproceedings{YuGriJiaJé20153,
author = {Chendi Yu and Vincent Gripon and Xiaoran
Jiang and Hervé Jégou},
title = {Neural Associative Memories as Accelerators
for Binary Vector Search},
booktitle = {Proceedings of Cognitive},
year = {2015},
pages = {85-89},
month = {March},
}

Associative memories are devices capable of retrieving previously stored messages from parts of their content. They are used in a variety of applications including CPU caches, routers, intrusion detection systems, etc. They are also considered a good model for human memory, motivating the use of neural-based techniques. When it comes to cognition, it is important to provide such devices with the ability to perform complex requests, such as union, intersection, difference, projection and selection. In this paper, we extend a recently introduced associative memory model to perform relational algebra operations. We introduce new algorithms and discuss their performance, which provides insight into how the brain performs some high-level information processing tasks.
Download manuscript.
Bibtex@inproceedings{AboGriTes20153,
author = {Ala Aboudib and Vincent Gripon and
Baptiste Tessiau},
title = {Implementing Relational-Algebraic Operators
for Improving Cognitive Abilities in Networks of
Neural Cliques},
booktitle = {Proceedings of Cognitive},
year = {2015},
pages = {36-41},
month = {March},
}

In this paper, a context-driven search engine is presented based on a new family of associative memories. It stores only the associations between items from multiple search fields in the form of binary links, and merges repeated field items to reduce the memory requirements. It achieves a 13.6× reduction in memory bits and accesses, and an 8.6× reduction in the number of clock cycles per search operation, compared to a classical field-based search structure using content-addressable memory. Furthermore, using parallel computational nodes, the proposed search engine achieves a five-orders-of-magnitude reduction in the number of clock cycles compared to a CPU-based counterpart running a classical search algorithm in software.
Download manuscript.
Bibtex@inproceedings{JarOniGriHanGro201410,
author = {Hooman Jarollahi and Naoya Onizawa and
Vincent Gripon and Takahiro Hanyu and Warren J.
Gross},
title = {Algorithm and Architecture for a
Multiple-Field Context-Driven Search Engine Using
Fully-Parallel Clustered Associative Memories},
booktitle = {Proceedings of SiPS},
year = {2014},
pages = {1-6},
month = {October},
}

Associative memories are widely used devices which can be viewed as universal error-correcting decoders. Employing error-correcting code principles in these devices has greatly enhanced their performance. In this paper we reintroduce a neural-based model using the formalism of linear algebra and extend its functionality, originally limited to erasure retrieval, to handle approximate inputs. In order to perform the retrieval, we use an iterative algorithm that provably converges. We then analyze the performance of the associative memory under the assumption of connection independence. We support our theoretical results with numerical simulations.
Download manuscript.
Bibtex@inproceedings{GriSkaRab201410,
author = {Vincent Gripon and Vitaly Skachek and
Michael Rabbat},
title = {Sparse Binary Matrices as Efficient
Associative Memories},
booktitle = {Proceedings of the 52nd Allerton
conference},
year = {2014},
pages = {499-504},
month = {October},
}

At the microscopic level, the brain is fundamentally a matter of physics and chemistry, as are all the components of the universe. At the macroscopic scale, behavior, psychology and affects are the main dimensions of its life. To convert atoms and molecules into intelligence, some kind of information has to be fixed in the grey matter of the cerebral cortex. The way this "mental information" is materialized and processed is still an enigma, probably the most puzzling problem posed to science today. At this mesoscopic level of brain functioning, the concepts to consider are likely the same as those considered in communication and information theory: mainly information, noise, coding and modulation. This paper proposes some ideas that could help understand some features of the brain from an information-processing perspective.
Download manuscript.
Bibtex@inproceedings{BerDufGriJia20148,
author = {Claude Berrou and Olivier Dufor and
Vincent Gripon and Xiaoran Jiang},
title = {Information, Noise, Coding, Modulation:
What about the Brain?},
booktitle = {Proceedings of the 8th symposium on
Turbo Codes and Iterative Information Processing},
year = {2014},
pages = {167-172},
month = {August},
}

Download manuscript.
Bibtex@inproceedings{BogGriSegHei20147,
author = {Bartosz Boguslawski and Vincent Gripon and
Fabrice Seguin and Frédéric Heitzmann},
title = {Huffman Coding for Storing Non-Uniformly
Distributed Messages in Networks of Neural Cliques},
booktitle = {Proceedings of the Twenty-Eighth AAAI
Conference on Artificial Intelligence, volume 1},
year = {2014},
pages = {262-268},
month = {July},
}

Associative memories, serving as building blocks for a variety of algorithms, store content in such a way that it can later be retrieved by probing the memory with a small portion of it, rather than with an address as in more traditional memories. Recently, Gripon and Berrou have introduced a novel construction which builds on ideas from the theory of error-correcting codes, greatly outperforming the celebrated Hopfield neural networks in terms of the number of stored messages per neuron and the number of stored bits per synapse. The work of Gripon and Berrou proposes two retrieval rules, SUM-OF-SUM and SUM-OF-MAX. In this paper, we implement both rules on a general purpose graphics processing unit (GPU). SUM-OF-SUM uses only matrix-vector multiplication and is easily implemented on the GPU, whereas SUM-OF-MAX, which involves nonlinear operations, is much less straightforward to implement. However, SUM-OF-MAX gives significantly better retrieval error rates. We propose a hybrid scheme tailored for implementation on a GPU which achieves an 880-fold speedup without sacrificing any accuracy.
Download manuscript.
Bibtex@inproceedings{YaoGriRab20147,
author = {Zhe Yao and Vincent Gripon and Michael
Rabbat},
title = {A GPU-based Associative Memory using Sparse
Neural Networks},
booktitle = {Proceedings of the PCNN14 conference},
year = {2014},
pages = {688-692},
month = {July},
}
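The two retrieval rules contrasted above can be sketched in a few lines of NumPy. The cluster layout, names, and tiny sizes below are illustrative, not the paper's GPU kernels: the network has `c` clusters of `l` neurons, `W` is the binary connection matrix, and `v` is the binary input pattern.

```python
import numpy as np

def sum_of_sum(W, v, c, l):
    """SUM-OF-SUM rule: a plain matrix-vector product, then keep the
    strongest neuron in each cluster (winner-take-all per cluster)."""
    s = W @ v
    out = np.zeros_like(v)
    for k in range(c):
        blk = slice(k * l, (k + 1) * l)
        out[blk][np.argmax(s[blk])] = 1
    return out

def sum_of_max(W, v, c, l):
    """SUM-OF-MAX rule: each neuron scores the number of *clusters*
    (not neurons) from which it receives at least one active signal,
    which is the nonlinear step that is harder to map to a GPU."""
    active = W * v  # incoming connections from currently active neurons
    s = np.zeros(c * l, dtype=int)
    for k in range(c):
        blk = slice(k * l, (k + 1) * l)
        s += active[:, blk].max(axis=1)  # at most 1 per source cluster
    out = np.zeros_like(v)
    for k in range(c):
        blk = slice(k * l, (k + 1) * l)
        out[blk][np.argmax(s[blk])] = 1
    return out
```

The per-cluster `max` is why SUM-OF-MAX discriminates better: a cluster contributing many spurious connections still counts only once.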

We consider associative memories based on clustered graphs that were recently introduced. These memories are almost optimal in terms of the amount of storage they require (efficiency), and allow retrieving messages with low complexity. We study an unreliable implementation of the memory and compare its error rate and storage efficiency with that of a reliable implementation. We present analytical and simulation results that indicate that the proposed memory structure can tolerate a large number of faults at a reasonable cost, thereby making it a good candidate for achieving highly efficient circuit implementations of associative memories.
Download manuscript.
Bibtex@inproceedings{LedGriRabGro20145,
author = {François LeducPrimeau and Vincent Gripon
and Michael Rabbat and Warren Gross},
title = {Cluster-based Associative Memories Built
From Unreliable Storage},
booktitle = {ICASSP},
year = {2014},
pages = {8370-8374},
month = {May},
}

We study properties of the family of smallworld random graphs introduced in Watts & Strogatz (1998), focusing on the spectrum of the normalized graph Laplacian. This spectrum influences the extent to which a signal supported on the vertices of the graph can be simultaneously localized on the graph and in the spectral domain (the surrogate of the frequency domain for signals supported on a graph). This characterization has implications for inferring or interpolating functions supported on such graphs when observations are only available at a subset of nodes.
Download manuscript.
Bibtex@inproceedings{RabGri20145,
author = {Michael Rabbat and Vincent Gripon},
title = {Towards a Spectral Characterization of
Signals Supported on Small-World Networks},
booktitle = {ICASSP},
year = {2014},
pages = {4793-4797},
month = {May},
}

Associative memories are data structures addressed using part of the content rather than an index. They offer good fault reliability and biological plausibility. Among different families of associative memories, sparse ones are known to offer the best efficiency (ratio of the amount of bits stored to that of bits used by the network itself). Their retrieval process performance has been shown to benefit from the use of iterations. In this paper, we introduce several algorithms to enhance the performance of the retrieval process in recently proposed sparse associative memories based on binary neural networks. We show that these algorithms provide better performance than existing techniques and discuss their biological plausibility. We also analyze the required number of iterations and derive corresponding curves.
Download manuscript.
Bibtex@inproceedings{AboGriJia20145,
author = {Ala Aboudib and Vincent Gripon and Xiaoran
Jiang},
title = {A study of retrieval algorithms of sparse
messages in networks of neural cliques},
booktitle = {Proceedings of Cognitive 2014},
year = {2014},
pages = {140-146},
month = {May},
}

We study the use of sparse structured associative memories as a memory-efficient and computationally efficient data structure for representing a set of elements when one wishes to perform set-membership queries and some errors (false positives) are tolerable. Associative memories, when viewed as representing a set, enjoy a number of interesting properties, including that set-membership queries can be carried out even when the input (query element) is only partially known or is partially corrupted. The associative memories considered here (initially proposed in [Gripon and Berrou, 2011]) encode the set in the edge structure of a graph. In this paper we generalize this construction to encode the set in the edge structure of a hypergraph. We derive bounds on the false positive rates (the probability that the associative memory erroneously declares that an element is in the stored set when it, in fact, was not). Interestingly, the proposed structures enjoy many of the same properties as Bloom filters (e.g., they have a zero false negative rate, the time to perform an insert and lookup does not depend on the number of elements stored, and the false positive rate can be reduced by using additional memory for storage), while also offering the properties of associative memories (allowing for queries on partial or corrupted inputs).
Download manuscript.
Bibtex@inproceedings{GriSkaRab201310,
author = {Vincent Gripon and Vitaly Skachek and
Michael G. Rabbat},
title = {Sparse Structured Associative Memories as
Efficient Set-Membership Data Structures},
booktitle = {Proceedings of the 51st Allerton
conference},
year = {2013},
pages = {500-505},
month = {October},
}
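As a rough illustration of the graph-based construction, the sketch below stores each n-bit element as a clique over `c` clusters (one vertex per chunk of the element) and answers membership queries by checking that every edge of that clique is present. The chunking scheme and class names are assumptions for illustration, not the paper's exact parameters; like a Bloom filter, the structure has no false negatives but can report false positives when cliques overlap.

```python
import numpy as np

def chunks(word, c, bits):
    # split an n-bit integer into c chunks of `bits` bits each (LSB first)
    return [(word >> (bits * i)) & ((1 << bits) - 1) for i in range(c)]

class CliqueSetMembership:
    """Encode a set in the edge structure of a c-partite graph
    (an illustrative sketch in the spirit of the construction above)."""
    def __init__(self, c, bits):
        self.c, self.bits = c, bits
        self.l = 1 << bits  # vertices per cluster
        self.W = np.zeros((c * self.l, c * self.l), dtype=bool)

    def _vertices(self, word):
        # one vertex per cluster, selected by the corresponding chunk
        return [i * self.l + ch
                for i, ch in enumerate(chunks(word, self.c, self.bits))]

    def insert(self, word):
        v = self._vertices(word)
        for a in v:
            for b in v:
                if a != b:
                    self.W[a, b] = True  # store every edge of the clique

    def query(self, word):
        # member iff all clique edges are present (zero false negatives)
        v = self._vertices(word)
        return all(self.W[a, b] for a in v for b in v if a != b)
```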

Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content, a sort of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. First, we derive minimum residual error rates when the stored data comes from a uniform binary source. Second, we determine the minimum amount of memory required to store the same data. Finally, we bound the computational complexity of message retrieval. We then compare these bounds with two existing associative memory architectures: the celebrated Hopfield neural networks and a neural network architecture introduced more recently by Gripon and Berrou.
Download manuscript.
Bibtex@inproceedings{GriRab20139,
author = {Vincent Gripon and Michael Rabbat},
title = {Maximum Likelihood Associative Memories},
booktitle = {Proceedings of Information Theory
Workshop},
year = {2013},
pages = {1-5},
month = {September},
}
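For a uniform source, retrieval under the maximum likelihood principle reduces to a minimum-Hamming-distance scan over the stored messages, restricted to the probe's known positions. A minimal sketch, using `None` to mark erased symbols (an illustrative convention, not the paper's notation):

```python
def ml_retrieve(stored, probe):
    """Return the stored message minimizing Hamming distance to the
    probe on its known (non-erased) positions: the ML estimate when
    messages are drawn uniformly and corruption is symmetric."""
    def dist(msg):
        return sum(1 for m, p in zip(msg, probe)
                   if p is not None and m != p)
    return min(stored, key=dist)
```

This exhaustive scan is exactly what the complexity bounds in the paper are about: ML retrieval is optimal in error rate but costs a pass over all stored messages, which architectures like Gripon-Berrou networks trade away for speed.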

We introduce an extension to recently proposed associative memories that allows retrieval of previously stored messages given approximate inputs. We derive performance equations and show that, at a fixed probability of error, memory efficiency is within a constant factor of optimal while complexity remains low (quadratic). These extensions open new perspectives for using associative memories in database engines to perform complex operations while keeping response times low.
Download manuscript.
Bibtex@inproceedings{GriJia20139,
author = {Vincent Gripon and Xiaoran Jiang},
title = {Mémoires associatives pour observations
floues},
booktitle = {Proceedings of XXIVth Gretsi seminar},
year = {2013},
month = {September},
}

This paper considers the problem of inferring the structure of a network from indirect observations. Each observation (a “trace”) is the unordered set of nodes which are activated along a path through the network. Since a trace does not convey information about the order of nodes within the path, there are many feasible orders for each observed trace, and thus the problem of inferring the network from traces is, in general, ill-posed. We propose and analyze an algorithm which inserts edges by ordering each trace into a path according to which pairs of nodes in the path co-occur most frequently in the observations. When all traces involve exactly 3 nodes, we derive necessary and sufficient conditions for the reconstruction algorithm to exactly recover the graph. Finally, for a family of random graphs, we present expressions for reconstruction error probabilities (false discoveries and missed detections).
Download manuscript.
Bibtex@inproceedings{GriRab20137,
author = {Vincent Gripon and Michael Rabbat},
title = {Reconstructing a Graph from Path Traces},
booktitle = {Proceedings of International Symposium
on Information Theory},
year = {2013},
pages = {2488-2492},
month = {July},
}
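One plausible reading of the co-occurrence ordering for 3-node traces can be sketched as follows: within each trace, the pair of nodes that co-occurs least often across all observations is presumed to be the path's endpoints, so edges are inserted from the remaining (middle) node to both. This is an illustrative sketch of the idea, not the paper's exact reconstruction rule:

```python
from collections import Counter
from itertools import combinations

def reconstruct_from_triples(traces):
    """Infer an edge set from unordered 3-node traces by ordering each
    trace according to pair co-occurrence counts."""
    counts = Counter()
    for t in traces:
        for pair in combinations(sorted(t), 2):
            counts[pair] += 1
    edges = set()
    for t in traces:
        pairs = list(combinations(sorted(t), 2))
        u, v = min(pairs, key=lambda p: counts[p])  # presumed endpoints
        (middle,) = set(t) - {u, v}
        edges.add(tuple(sorted((u, middle))))
        edges.add(tuple(sorted((v, middle))))
    return edges
```

On a 5-cycle observed through all consecutive triples, adjacent pairs co-occur in two traces while the endpoints of each path co-occur in only one, so the rule recovers the cycle exactly; ties in the counts are where reconstruction can fail, matching the paper's focus on exact-recovery conditions.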

A low-power Content-Addressable Memory (CAM) is introduced, employing a new mechanism for associativity between the input tags and the corresponding address of the output data. The proposed architecture is based on a recently developed clustered-sparse-network using binary-weighted connections that, on average, eliminates most of the parallel comparisons performed during a search. Therefore, the dynamic energy consumption of the proposed design is significantly lower than that of a conventional low-power CAM design. Given an input tag, the proposed architecture computes a few possibilities for the location of the matched tag and performs comparisons on them to locate a single valid match. A 0.13 µm CMOS technology was used for simulation purposes. The energy consumption and the search delay of the proposed design are 9.5% and 30.4%, respectively, of those of the conventional NAND architecture, with 3.4% more transistors.
Download manuscript.
Bibtex@inproceedings{JarGriOniGro20136,
author = {Hooman Jarollahi and Vincent Gripon and
Naoya Onizawa and Warren J. Gross},
title = {A Low-Power Content-Addressable Memory Based
on Clustered-Sparse Networks},
booktitle = {Proceedings of 24th International
Conference on Applicationspecific Systems,
Architectures and Processors},
year = {2013},
pages = {642-653},
month = {June},
}

Associative memories retrieve stored information given partial or erroneous input patterns. Recently, a new family of associative memories based on Clustered Neural Networks (CNNs) was introduced that can store many more messages than classical Hopfield Neural Networks (HNNs). In this paper, we propose hardware architectures of such memories for partial or erroneous inputs. The proposed architectures eliminate winner-take-all modules, thereby reducing hardware complexity: they consume 65% fewer FPGA lookup tables and increase the operating frequency by approximately 1.9× compared to previous work.
Download manuscript.
Bibtex@inproceedings{JarOniGriGro20135,
author = {Hooman Jarollahi and Naoya Onizawa and
Vincent Gripon and Warren J. Gross},
title = {Reduced-complexity binary-weight-coded
associative memories},
booktitle = {Proceedings of International Conference
on Acoustics, Speech, and Signal Processing},
year = {2013},
pages = {2523-2527},
month = {May},
}

V. Gripon, M. Rabbat, V. Skachek and W. J. Gross, "Compressing multisets using tries," in Proceedings of Information Theory Workshop, Lausanne, Switzerland, pp. 647-651, September 2012.
We consider the problem of efficient and lossless representation of a multiset of m words drawn with repetition from a set of size 2^n. One expects that encoding the (unordered) multiset should lead to significant savings in rate as compared to encoding an (ordered) sequence with the same words, since information about the order of words in the sequence corresponds to a permutation. We propose and analyze a practical multiset encoder/decoder based on the trie data structure. The act of encoding requires O(m(n + log m)) operations, and decoding requires O(mn) operations. Of particular interest is the case where the cardinality of the multiset scales as m = 2^(n/c) for some c > 1, as n → ∞. Under this scaling, and when the words in the multiset are drawn independently and uniformly, we show that the proposed encoding leads to an arbitrary improvement in rate over encoding an ordered sequence with the same words. Moreover, the expected length of the proposed codes in this setting is asymptotically within a constant factor of 5/3 of the lower bound.
Download manuscript.
Download presentation support.
Bibtex@inproceedings{GriRabSkaGro20129,
author = {Vincent Gripon and Michael Rabbat and
Vitaly Skachek and Warren J. Gross},
title = {Compressing multisets using tries},
booktitle = {Proceedings of Information Theory
Workshop},
year = {2012},
address = {Lausanne, Switzerland},
month = {September},
pages = {647-651},
}
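The trie idea can be sketched as follows: words sharing a prefix share trie nodes, and serializing the trie depth-first discards the order of the words, which is where the rate savings come from. This is a minimal illustration of the principle, not the paper's encoder (in particular, multiplicities are kept as a separate leaf-count stream, an assumption made here for simplicity):

```python
def encode(multiset, n):
    """Serialize a multiset of n-bit integers as (trie bits, leaf counts),
    walking the binary trie of the words depth-first, MSB first."""
    words = sorted(multiset)
    bits, counts = [], []
    def rec(ws, depth):
        if depth == n:
            counts.append(len(ws))  # multiplicity of this word
            return
        zeros = [w for w in ws if not (w >> (n - 1 - depth)) & 1]
        ones = [w for w in ws if (w >> (n - 1 - depth)) & 1]
        bits.append(1 if zeros else 0)  # flag: 0-child exists
        bits.append(1 if ones else 0)   # flag: 1-child exists
        if zeros: rec(zeros, depth + 1)
        if ones: rec(ones, depth + 1)
    rec(words, 0)
    return bits, counts

def decode(bits, counts, n):
    """Rebuild the multiset (in sorted order) from the two streams."""
    out, it, ct = [], iter(bits), iter(counts)
    def rec(prefix, depth):
        if depth == n:
            out.extend([prefix] * next(ct))
            return
        left, right = next(it), next(it)
        if left: rec(prefix << 1, depth + 1)
        if right: rec((prefix << 1) | 1, depth + 1)
    rec(0, 0)
    return out
```

Because the decoder replays the same depth-first traversal, no ordering information is ever transmitted: any permutation of the input multiset produces the identical bit stream.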

V. Gripon, V. Skachek, W. J. Gross and M. Rabbat, "Random clique codes," in Proceedings of the 7th International Symposium on Turbo Codes and Iterative Information Processing, Gothenburg, Sweden, pp. 121-125, August 2012.
A new family of associative memories based on sparse neural networks has been recently introduced. These memories achieve excellent performance thanks to the use of error-correcting coding principles. Based on these devices, we introduce a new family of codes termed clique codes. These codes are based on the cliques in balanced c-partite graphs describing associative memories. In particular, we study an ensemble of random clique codes, and prove that such an ensemble contains asymptotically good codes. Furthermore, these codes can be efficiently decoded using the neural-network-based associative memories with limited complexity and memory consumption. They offer an interesting new alternative to existing codes, in particular when the underlying channel is assumed to be a memoryless erasure channel.
Download manuscript.
Download presentation support.
Bibtex@inproceedings{GriSkaGroRab20128,
author = {Vincent Gripon and Vitaly Skachek and
Warren J. Gross and Michael Rabbat},
title = {Random clique codes},
booktitle = {Proceedings of the 7th International
Symposium on Turbo Codes and Iterative Information
Processing},
year = {2012},
address = {Gothenburg, Sweden},
month = {August},
pages = {121-125},
}

An original architecture of oriented sparse neural networks that enables the introduction of sequentiality in associative memories is proposed in this paper. This architecture can be regarded as a generalization of a recently proposed non-oriented binary network based on cliques. Using a limited neuron resource, the network is able to learn very long sequences and to retrieve them from the knowledge of only some consecutive symbols.
Download manuscript.
Bibtex@inproceedings{JiaGriBer20127,
author = {Xiaoran Jiang and Vincent Gripon and
Claude Berrou},
title = {Learning long sequences in binary neural
networks},
booktitle = {Proceedings of Cognitive 2012},
year = {2012},
address = {Nice, France},
pages = {165-170},
month = {July},
}

Associative memories are alternatives to indexed memories that, when implemented in hardware, can benefit many applications such as data mining. The classical neural-network-based methodology is impractical to implement since, as the size of the memory increases, the number of information bits stored per memory bit (efficiency) approaches zero. In addition, the length of a message to be stored and retrieved needs to match the number of nodes in the network, limiting the total number of messages the network is capable of storing (diversity). Recently, a novel algorithm based on sparse clustered neural networks has been proposed that achieves nearly optimal efficiency and large diversity. In this paper, a proof-of-concept hardware implementation of these networks is presented. Limitations and possible future research areas are discussed.
Download manuscript.
Download presentation support.
Bibtex@inproceedings{JarOniGriGro20125,
author = {Hooman Jarollahi and Naoya Onizawa and
Vincent Gripon and Warren J. Gross},
title = {Architecture and Implementation of an
Associative Memory Using Sparse Clustered Networks},
booktitle = {Proceedings of IEEE International
Symposium on Circuits and Systems},
year = {2012},
pages = {2901-2904},
month = {May},
}

A new family of sparse neural networks achieving nearly optimal performance has been recently introduced. In these networks, messages are stored as cliques in clustered graphs. In this paper, we interpret these networks using the formalism of error correcting codes. To achieve this, we introduce two original codes, the thrifty code and the clique code, that are both subfamilies of binary constant weight codes. We also provide the networks with an enhanced retrieving rule that enables a property of answer correctness and that improves performance.
I thank Pascal Vontobel for pointing out several mistakes in the article (this version is updated).
Download manuscript.
Download presentation support.
Bibtex@inproceedings{GriBer20122,
author = {Vincent Gripon and Claude Berrou},
title = {Nearly-optimal associative memories based
on distributed constant weight codes},
booktitle = {Proceedings of Information Theory and
Applications Workshop},
year = {2012},
address = {San Diego, CA, USA},
pages = {269-273},
month = {February},
}

Associative memories are devices that are able to learn messages and to retrieve them in the presence of errors or erasures. Their mechanics are similar to those of error decoders. However, the role of correlation is opposite in the two devices: it is the essence of the retrieval process in the former and avoided in the latter. Original codes are introduced in this paper to allow the effective combination of the two domains. The main idea is to associate with each message to learn a clique in a binary neural network. The obtained performance is dramatically better than the state of the art, for instance Hopfield neural networks. Moreover, the proposed model is biologically plausible: it uses binary sparse connections between clusters of neurons provided with only two operations: sum and selection of maximum.
Download manuscript.
Download presentation support.
Bibtex@inproceedings{GriBer20114,
author = {Vincent Gripon and Claude Berrou},
title = {A simple and efficient way to store many
messages using neural cliques},
booktitle = {Proceedings of IEEE Symposium on
Computational Intelligence, Cognitive Algorithms,
Mind, and Brain},
year = {2011},
pages = {54-58},
address = {Paris, France},
month = {April},
}

C. Berrou and V. Gripon, "Coded Hopfield networks," in Proceedings of the 6th International Symposium on Turbo Codes and Iterative Information Processing, Brest, France, pp. 1-5, September 2010.
Error-correcting coding is introduced in associative memories based on Hopfield networks in order to increase the learning diversity as well as the recall robustness in the presence of erasures and errors. To achieve this, the graph associated with the classical Hopfield network is transformed into a bipartite graph in which incoming information is linked to orthogonal or quasi-orthogonal codes. Whereas learning is similar to that of classical (i.e. Hebbian) Hopfield networks, memory retrieval relies on error correction decoding, which offers strong discrimination properties between the memorized patterns.
Download manuscript.
Bibtex@inproceedings{BerGri20109,
author = {Claude Berrou and Vincent Gripon},
title = {Coded Hopfield networks},
booktitle = {Proceedings of the 6th International
Symposium on Turbo Codes and Iterative Information
Processing},
year = {2010},
pages = {1-5},
address = {Brest, France},
month = {September},
}

We study a model of games that combines concurrency, imperfect information and stochastic aspects. These are finite-state games in which, at each round, the two players simultaneously and independently choose an action. A successor state is then chosen according to a fixed probability distribution depending on the previous state and on the pair of actions chosen by the players. Imperfect information is modeled as follows: both players have an equivalence relation over states and, instead of observing the exact state, they only know to which equivalence class it belongs. Therefore, if two partial plays are indistinguishable by some player, he should behave the same in both of them. We consider reachability objectives (does the play eventually visit a final state?) and Büchi objectives (does the play visit a final state infinitely often?). Our main contribution is to prove that the following problem is 2-ExpTime-complete: decide whether the first player has a strategy that wins almost surely against any possible strategy of her opponent. We also characterize the strategies the first player needs in order to win almost surely.
Download manuscript.
Bibtex@inproceedings{GriSer20097,
author = {Vincent Gripon and Olivier Serre},
title = {Qualitative Concurrent Stochastic Games
with Imperfect Information},
booktitle = {Proceedings of 36th International
Colloquium of Automata, Languages and Programming},
year = {2009},
editor = {Springer},
pages = {200-211},
series = {Lecture Notes in Computer Science},
address = {Rhodes, Greece},
month = {July},
}


