cv

Contact

Name: Janis AIAD
Title: Master's student in Mathematics, Statistics and Machine Learning
Email: jan** [dot] aiad [dot] polytechnique [dot] org
GitHub: https://github.com/janisaiad

National Award

Publications

  • 2024.06.01
    Solving MBDA's Optimal Assignment on IBM Quantum Computers
    EURO 2024 Conference
    Accepted for an oral spotlight at the EURO 2024 Conference in the Hybrid Classical-Quantum Algorithms session. Theoretical derivation of a new constraint-handling method in QAOA for combinatorial assignment problems, benchmarked on IBM QPUs.
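For context, the standard baseline for handling assignment constraints in QAOA is a quadratic-penalty QUBO encoding. The sketch below (a toy 2x2 instance with an assumed penalty weight, not the new constraint-handling method from the paper) brute-forces the penalized cost and recovers the valid minimum-cost assignment:

```python
import itertools

# Toy 2x2 assignment problem: minimize sum_ij C[i][j] * x[i][j]
# subject to each row and column of x containing exactly one 1.
C = [[1.0, 4.0],
     [3.0, 2.0]]
P = 10.0  # penalty weight; must exceed the cost scale

def qubo_cost(x):
    # x is a flat tuple of 4 bits; x[2*i + j] assigns agent i to task j
    cost = sum(C[i][j] * x[2*i + j] for i in range(2) for j in range(2))
    # quadratic penalties enforcing one-hot rows and columns
    for i in range(2):
        cost += P * (sum(x[2*i + j] for j in range(2)) - 1) ** 2
    for j in range(2):
        cost += P * (sum(x[2*i + j] for i in range(2)) - 1) ** 2
    return cost

best = min(itertools.product([0, 1], repeat=4), key=qubo_cost)
print(best, qubo_cost(best))  # → (1, 0, 0, 1) 3.0
```

On a QPU this penalized cost becomes the diagonal cost Hamiltonian; the brute force here only verifies that the penalty makes infeasible bitstrings energetically unfavorable.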

Preprints

  • 2026.09.01
    Tensor Programs for Low Rank Neural Networks and Physics-Informed Losses
    Preprint in preparation
    In progress with Prof. Shijun Zhang - Building a new framework for NTK computations covering low-rank neural networks, frozen weights, and Sobolev losses, unifying NTK theory across the practical settings of scientific ML.
  • 2026.09.01
    ResNets as Neural ODEs, Hamiltonian neural architecture search, and minima flatness
    Preprint in preparation
    Working with Prof. Tan Bui-Thanh's research group on loss-surface geometry and training dynamics; early-stage research on optimization and neural architecture search.
  • 2026.05.01
    Low-Rank Neural Networks Are Enough: the MLP Neural Tangent Kernel
    Preprint in preparation
    We develop a Neural Tangent Kernel (NTK) theory for low-rank and random-feature networks (RF-LR) that provides a principled and computationally efficient route to the kernel regime. Assuming fixed weights, as classical NTK theory predicts, we prove that low-rank weight matrices do not lose expressivity: RF-LR preserves the same reproducing kernel Hilbert space (RKHS) as the shallow ReLU kernel. At the spectral level for random matrices, the NTK Gram matrix exhibits a spiked and shifted Marchenko-Pastur distribution, similar to the two-layer case. Finally, we give an explicit NTK recursion and a closed-form depth expansion, establishing that RF-LR preserves expressivity and gives optimization guarantees while lowering the entry cost to the kernel regime from O(N²) to O(rN).
  • 2026.05.01
    Fractional Sobolev losses for NNs optimization dynamics preconditioning
    Preprint in preparation
    In progress with Prof. Haizhao Yang - We derive theoretical bounds for scientific machine learning optimization problems, proving a trade-off between data regularity and function-space regularity that preconditions the optimization.
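The low-rank random-feature setting of the NTK preprint above can be illustrated with a minimal numpy sketch (a hypothetical toy model with assumed dimensions, not the paper's construction): a frozen rank-r first layer whose induced kernel, when only the readout is trained, is the Gram matrix of the ReLU features.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, r, n = 5, 200, 3, 10   # input dim, width, rank, number of samples

# frozen low-rank first layer: W = U @ V has rank <= r
U = rng.normal(size=(N, r)) / np.sqrt(r)
V = rng.normal(size=(r, d)) / np.sqrt(d)
X = rng.normal(size=(n, d))

# feature map phi(x) = relu(W x) / sqrt(N), computed as (X V^T) U^T
# so the per-sample cost is O(dr + rN) instead of O(dN) for a dense layer
Phi = np.maximum((X @ V.T) @ U.T, 0.0) / np.sqrt(N)

# with a frozen first layer, the induced kernel is the feature Gram matrix
K = Phi @ Phi.T

print(K.shape)                                   # (10, 10)
print(bool(np.allclose(K, K.T)))                 # True: symmetric
print(bool(np.all(np.linalg.eigvalsh(K) >= -1e-10)))  # True: PSD
```

The factored matmul order is the point of the exercise: the low-rank structure is what lowers the cost of entering the kernel regime, in the spirit of the O(N²) to O(rN) reduction claimed in the abstract.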

Education

  • 2025.01 - 2026.12 | Paris, France
    Master's degree
    École Normale Supérieure Paris-Saclay
    MVA: Mathematics of Learning - Master's-level program in mathematics, statistics, and learning theory
    • Optimal Transport Theory (Villani), Random Matrix Theory in Statistics/ML, Topological Data Analysis, Deep Learning Theory, Heavy tails statistics theory, Discrete differential geometry, Geometric Deep Learning, Large Scale non-convex Optimization, 3D Computer Vision, ML for Time Series
  • 2022.01 - 2025.12 | Paris, France
    Bachelor's and Master's degrees in Mathematics
    École Polytechnique, IP Paris
    Top French program for research in science and engineering | Bachelor's-level program in mathematics
    • Functional Analysis, Topological Data Analysis and Category Theory, Numerical and convergence analysis, Queuing Theory and randomized algorithms, Differential geometry and tensor calculus, Stochastic Processes and Malliavin calculus, Symplectic geometry, Data driven physics, Variational PDEs and transport
  • 2020.01 - 2022.12 | Lyon, France
    Classes Préparatoires
    Lycée du Parc
    Preparatory program in the Pure Mathematics, Physics, and Computer Science track, summa cum laude
    • Multivariate calculus, Lie theory, Commutative Algebra, Complex Analysis, Probability Theory and Random Graphs, Classical Mechanics, Electromagnetism, Thermodynamics, Quantum physics, Statistical physics, Complexity and Graph theory, Algorithms and Data Structures, Competitive programming, Combinatorics, Set theory and Logic

Github Repositories

  • 2024.01 - 2026.03
    Hierarchical Causal Models
    Developed and maintain a Python library for hierarchical causality in large-scale time series analysis, currently being refactored to a C++ backend. In preparation for the JMLR open-source ML track (March 2026).
  • 2023.03 - 2026.03
    tinySCIML - Operator Learning Library
    Developed and maintain a lightweight TensorFlow SciML library implementing DeepONet and FNO architectures for operator learning and PDE solving in production-oriented workflows. In preparation for a PR to DeepXDE (March 2026).
  • 2024.06 - 2025.08
    DeNN-NTK: NTK finite width analysis
    Establishing experimental and theoretical foundations for large-scale NTK finite-width corrections, with applications to regression and scientific machine learning.
  • 2025.01 - 2026.09
    MMNN: Tensor Programs for Low Rank Random Features Neural Networks and Physics-Informed Losses
    In progress with Prof. Shijun Zhang - Building a new framework for NTK computations covering low-rank neural networks, frozen weights, and Sobolev losses, unifying NTK theory across the practical settings of scientific ML.
  • 2023.09 - 2024.05
    FPGA HDR: Hardware-Accelerated High Dynamic Range Imaging
    VHDL (VHSIC Hardware Description Language) library for ultra-low-latency High Dynamic Range (HDR) imaging on a bare-metal Intel Cyclone IV FPGA, handwritten from scratch. In preparation for submission to OpenCores.
  • 2024.01 - 2024.08
    HFT QR RL: High Frequency Trading with Queue Reactive models
    Under the supervision of Prof. Charles-Albert Lehalle, building on Queue Reactive models (https://arxiv.org/pdf/1312.0563). Developed statistical tests for exogenous price moves in market microstructure using one year of nanosecond-resolution tick data from major Nasdaq stocks (Google, Apple, etc.; 10^9 entries). Blog post available at http://www.cmap.polytechnique.fr/~charles-albert.lehalle/projects/2024QR/ prior to journal publication.
  • 2022.01 - 2023.12
    OCaml Competitive Programming
    Collection of competitive programming solutions and algorithms implemented in OCaml, focusing on functional programming approaches to algorithmic problem solving.
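As background for the tinySCIML entry above, the core DeepONet computation, an inner product between a branch net applied to input-function samples and a trunk net applied to query coordinates, can be sketched as an untrained numpy forward pass (all shapes and weights here are hypothetical, not the library's API):

```python
import numpy as np

rng = np.random.default_rng(1)
m, p, q = 20, 8, 50   # sensor count, latent dim, number of query points

def mlp(x, W1, W2):
    # tiny two-layer tanh network
    return np.tanh(x @ W1) @ W2

# branch net encodes the input function u sampled at m fixed sensors
Wb1, Wb2 = rng.normal(size=(m, 16)), rng.normal(size=(16, p))
# trunk net encodes the query coordinate y
Wt1, Wt2 = rng.normal(size=(1, 16)), rng.normal(size=(16, p))

u = np.sin(np.linspace(0.0, np.pi, m))        # samples of the input function
y = np.linspace(0.0, 1.0, q).reshape(-1, 1)   # query locations

b = mlp(u[None, :], Wb1, Wb2)   # shape (1, p)
t = mlp(y, Wt1, Wt2)            # shape (q, p)
G_u = (t * b).sum(axis=1)       # DeepONet output G(u)(y), shape (q,)

print(G_u.shape)  # (50,)
```

The separation into branch and trunk is what lets one trained network evaluate the learned operator G(u) at arbitrary query points y without resampling the input function.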

References

Prof. Haizhao Yang (PhD in Mathematics, Stanford)
University of Maryland Mathematics Department - Research supervisor for NTK theory and operator learning.
Prof. Emilie Devijver
AI-vidence - Research supervisor for causality and time series analysis.
Prof. Davide Boschetto
MBDA - Research supervisor for quantum computing applications.
Prof. Marianne Clausel
AI-vidence - Research supervisor for causality and time series analysis.
Prof. Charles-Albert Lehalle
Research supervisor for high-frequency trading data mining.

Work

  • 2025.04 - 2025.08
    Visiting Research Internship
    University of Maryland Mathematics Department
    Under the supervision of Prof. Haizhao Yang; manuscript in preparation
  • 2024.06 - 2024.10
    Student Researcher
    AI-vidence
    XAI research startup - Student researcher under the supervision of Prof. Emilie Devijver
  • 2023.05 - 2024.05
    Student Researcher
    MBDA
    Student researcher under the supervision of Prof. Davide Boschetto
  • 2022.08 - 2023.04
    Leadership Training
    Marseille Firefighting Brigade
    Participated in emergency operations and operations research with the Marseille Firefighting Brigade.