Expressivity, sparsity and identifiability in deep neural networks
https://cap2021.sciencesconf.org/resource/page/id/6
Our paper “Sparsity-based audio declipping methods: selected overview, new algorithms, and large-scale evaluation” has been accepted for publication in the IEEE/ACM Transactions on Audio, Speech and Language Processing.
Here are some audio examples.
The SPADE Toolbox used to reproduce these results is available under the BSD-3-Clause License (https://opensource.org/licenses/BSD-3-Clause).
We also publicly release the code of A-SPADE (Analysis Sparse Audio DEclipper).
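The core idea behind sparsity-based declipping can be sketched in a few lines. The toy version below is illustrative only (it is not the SPADE Toolbox code or its API): it alternates between hard-thresholding the DFT coefficients of the current estimate and projecting back onto the set of signals consistent with the clipped observation (reliable samples kept exactly, clipped samples constrained beyond the clipping level).

```python
import numpy as np

def declip(y, theta, k=8, n_iter=200):
    """Toy sparsity-based declipper (illustrative sketch, not the toolbox code)."""
    r = np.abs(y) < theta                 # reliable (unclipped) samples
    x = y.copy()
    for _ in range(n_iter):
        # (a) sparsify: keep only the k largest-magnitude DFT coefficients
        X = np.fft.fft(x)
        X[np.argsort(np.abs(X))[:-k]] = 0.0
        x = np.real(np.fft.ifft(X))
        # (b) project onto the clipping-consistency set
        x[r] = y[r]                       # reliable samples are kept exactly
        hi = (~r) & (y > 0)               # positively clipped samples
        lo = (~r) & (y < 0)               # negatively clipped samples
        x[hi] = np.maximum(x[hi], theta)  # must lie above the clipping level
        x[lo] = np.minimum(x[lo], -theta)
    return x

# Demo: clip a two-tone signal, then restore it
t = np.arange(256)
s = np.sin(2 * np.pi * 5 * t / 256) + 0.5 * np.sin(2 * np.pi * 12 * t / 256)
theta = 0.6
y = np.clip(s, -theta, theta)
x = declip(y, theta)
```

Because the clean two-tone signal is exactly sparse in the DFT basis and satisfies the consistency constraints, it is a fixed point of this iteration, and the reconstruction error drops well below that of the clipped signal.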
Convex optimization makes it possible to estimate, on a daily basis, the evolution of the Covid-19 reproduction rate, both over time and across French départements.
Details in the scientific news item entitled “Comment mieux estimer l’évolution du taux de reproduction de la Covid-19 ?” (“How to better estimate the evolution of the Covid-19 reproduction rate?”) published on the INP website.
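To give a flavor of the approach (a hedged sketch, not the authors' exact method, and with purely illustrative parameter values): daily case counts Z(t) are modeled as Z(t) ≈ R(t) · Σₛ φ(s) Z(t−s), where φ is a serial-interval kernel, and R(t) is estimated by a convex fit with a temporal smoothness penalty. With a quadratic data term and penalty, the minimizer solves a linear system:

```python
import numpy as np

# Synthetic example: a piecewise-constant reproduction rate that drops mid-series.
T = 60
true_R = np.where(np.arange(T) < 30, 1.3, 0.8)

# Illustrative discretized serial-interval kernel (bell-shaped, peak near 5 days).
phi = np.exp(-0.5 * ((np.arange(1, 15) - 5.0) / 2.0) ** 2)
phi /= phi.sum()

# Generate case counts from the model Z(t) = R(t) * P(t),
# with P(t) = sum_s phi(s) Z(t - s).
Z = np.zeros(T); Z[0] = 100.0
P = np.zeros(T)
for t in range(1, T):
    past = Z[max(0, t - len(phi)):t][::-1]
    P[t] = phi[:len(past)] @ past
    Z[t] = true_R[t] * P[t]

# Convex estimate: minimize ||Z - R*P||^2 + lam * ||D R||^2,
# where D is the first-difference matrix; the objective is quadratic,
# so the minimizer solves one linear system.
lam = 50.0
D = np.diff(np.eye(T), axis=0)
R_hat = np.linalg.solve(np.diag(P ** 2) + lam * D.T @ D, P * Z)
```

Away from the change point, the estimate recovers the true rate almost exactly, while the smoothness penalty regularizes the transition and the ill-posed first sample.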
#EnDirectDesLabos | An interdisciplinary team of researchers proposes an efficient mathematical and numerical tool for daily estimation of the evolution of the #COVID19 reproduction rate @lpensl
— Physique au CNRS (@INP_CNRS) November 6, 2020
Dear colleagues,
It is our pleasure to announce the upcoming one-month program on Mathematics for Signals, Images and Structured Data, to be held at CIRM (Marseille), January 25 – February 26, 2021. The program will consist of a research school on
• Mathematics, signal processing and learning (25–29 January 2021)
and four conferences.
The DANTE team at ENS de Lyon, France is seeking highly qualified candidates for a postdoctoral position on the algorithmic and mathematical foundations of resource-efficient machine learning, in the context of the ACADEMICS project (Machine Learning & Data Science for Complex and Dynamical Models) funded by the IDEXLyon.
Sample research topics include (download pdf version of the offer here): Expressivity and Robustness of Sparse Deep Networks; Provable Algorithms for Sparse Deep Learning; Random Sketches for Efficient Manifold & Graph-based Learning.
Starting date and duration: spring 2020; one year, renewable.
Location: http://www.ens-lyon.fr/en/
Scientific Contact: Rémi Gribonval (firstname.lastname@inria.fr)
To apply: applicants should send a detailed CV, a list of publications, and a brief statement of research interests. This material, together with two letters of reference, should be sent to Rémi Gribonval.