research
publications by category in reverse chronological order.
2025
- Preprint: Low Rank Neural Networks are enough for the MLP Neural Tangent Kernel. Janis Aiad (Heran), Haizhao Yang, and Shijun Zhang. 2025.
We develop a Neural Tangent Kernel (NTK) theory for low-rank random-feature networks (RF-LR) that provides a principled and computationally efficient route to the kernel regime. Assuming fixed weights, as classical NTK theory prescribes, we prove that low-rank weight matrices do not lose expressivity: RF-LR preserves the same reproducing kernel Hilbert space (RKHS) as the shallow ReLU kernel. As a toy example, we prove rank-driven concentration: the empirical three-layer NTK concentrates around its deterministic limit, and by combining angular and radial decoupling of Gaussian-process outputs, we show that the mean NTK has the same Laplacian RKHS as the shallow ReLU kernel. At the spectral level, under random-matrix assumptions to be relaxed, the NTK Gram matrix exhibits a spiked and shifted Marchenko–Pastur spectrum similar to the two-layer case: the bulk is shifted away from 0 thanks to the low-rank bottlenecks. Finally, we give an explicit NTK recursion and a closed-form depth expansion, clarifying how correlation kernels from random features couple across layers. Taken together, these results establish that RF-LR preserves expressivity and yields optimization guarantees through the smallest NTK eigenvalue, while lowering the entry cost to the kernel regime from O(N^2) to O(rN), and they provide a tractable framework for finite-width corrections and spectral predictions at moderate model sizes.
@preprint{aiad2024lowrank,
  title     = {Low Rank Neural Networks are enough for the MLP Neural Tangent Kernel},
  author    = {Aiad (Heran), Janis and Yang, Haizhao and Zhang, Shijun},
  year      = {2025},
  booktitle = {Preprint in preparation for conferences, available at http://github.com/janisaiad/MMNN/tree/main/refs/paper/tex/templateArxiv.pdf},
  url       = {http://github.com/janisaiad/MMNN/tree/main/refs/paper/tex/templateArxiv.pdf},
}
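To make the cost argument above concrete, here is a minimal NumPy sketch, not the paper's code, of a low-rank random-feature map and its empirical kernel Gram matrix. With the hidden weights frozen (the random-feature regime), the empirical NTK Gram matrix reduces to the feature Gram matrix, and factoring the hidden weights as U Vᵀ with rank r is what avoids ever forming a dense width-N weight matrix. The dimensions, scalings, and names such as `rf_lr_features` are illustrative assumptions, not the preprint's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rf_lr_features(X, N=2048, r=32):
    """Low-rank random ReLU features: hidden weights W = U @ V.T with rank r.

    X : (n, d) inputs. Returns the (n, N) feature matrix.
    Forming the features costs O(n * r * (d + N)) instead of the
    O(n * d * N) of a dense width-N random-feature layer.
    """
    d = X.shape[1]
    U = rng.standard_normal((d, r)) / np.sqrt(d)    # d x r factor
    V = rng.standard_normal((N, r)) / np.sqrt(r)    # N x r factor
    pre = (X @ U) @ V.T                             # never materialise the d x N weight matrix
    return np.maximum(pre, 0.0) * np.sqrt(2.0 / N)  # ReLU with NTK-style scaling

# With hidden weights fixed, the empirical NTK Gram matrix is the feature Gram matrix.
X = rng.standard_normal((200, 10))
Phi = rf_lr_features(X)
K_ntk = Phi @ Phi.T   # (200, 200) empirical kernel whose spectrum is of interest
```

The rank-r bottleneck is what the abstract's spectral claims are about: the eigenvalues of `K_ntk` are the objects whose bulk and spikes a Marchenko–Pastur-type analysis would describe.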
2024
- EURO 2024: Solving an MBDA's use case related to optimal assignment on current IBM Quantum Computers. Edouard Debry, Davide Boschetto, Janis Aiad (Heran), and 2 more authors. In proceedings of EURO 2024 - 33rd European Conference on Operational Research, Copenhagen, Denmark, Jul 2024.
In this communication, we present the solving of an MBDA use case related to optimal assignment on IBM online QPUs. The Quantum Approximate Optimization Algorithm (QAOA) (Farhi et al. 2014) forms the basis of the variational quantum algorithm we developed. We compare two methods of accounting for constraints: first, integrating them into the Cost Hamiltonian with Lagrangian multipliers; second, adapting the Mixer Hamiltonian following (Wang et al. 2022) and (Fuchs et al. 2022). For the former, determining the optimal Lagrangian multipliers is generally a challenging task, and integrating the constraints into the Cost Hamiltonian can significantly increase the associated circuit depth. The latter method restricts the search to the feasible part of the Hilbert space, which removes the need for Lagrangian multipliers but may significantly enlarge the circuit associated with the Mixer Hamiltonian and make the initial state harder to prepare. It is therefore interesting to compare the circuit depth of both methods against how well they statistically favor optimal solutions over non-optimal and infeasible ones, on relatively small instances that fit on current QPUs.
@inproceedings{debry2024mbda,
  title     = {Solving an MBDA's use case related to optimal assignment on current IBM Quantum Computers},
  author    = {Debry, Edouard and Boschetto, Davide and Aiad (Heran), Janis and Roux, Rachel and Kotenkoff, Alexandre},
  booktitle = {proceedings of EURO 2024 - 33rd European Conference on Operational Research},
  year      = {2024},
  month     = jul,
  location  = {Copenhagen, Denmark},
  url       = {https://www.euro-online.org/conferences/program/#abstract/4152},
}
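As a concrete illustration of the first constraint-handling route described in the abstract, here is a small NumPy sketch, not the code run on the IBM QPUs, that folds the one-hot assignment constraints into the cost as quadratic penalty (Lagrangian-multiplier) terms, producing the diagonal cost Hamiltonian a QAOA circuit would then encode. The 2x2 instance, the penalty weight, and the brute-force enumeration are illustrative assumptions, not MBDA's actual formulation.

```python
import itertools
import numpy as np

# Toy 2x2 assignment instance: cost[i, j] = cost of giving task j to agent i.
cost = np.array([[1.0, 4.0],
                 [3.0, 2.0]])
n = cost.shape[0]
lam = 10.0   # penalty (Lagrangian) weight; must dominate the cost scale

def penalised_energy(x):
    """Energy of one bitstring x (shape (n, n)) with the one-hot assignment
    constraints folded into the cost as quadratic penalties."""
    assign_cost = np.sum(cost * x)
    row_penalty = np.sum((x.sum(axis=1) - 1) ** 2)   # each agent gets exactly one task
    col_penalty = np.sum((x.sum(axis=0) - 1) ** 2)   # each task is used exactly once
    return assign_cost + lam * (row_penalty + col_penalty)

# The diagonal cost Hamiltonian a QAOA circuit would encode: one energy per
# computational-basis state over the n*n qubits.
energies = {}
for bits in itertools.product([0, 1], repeat=n * n):
    x = np.array(bits, dtype=float).reshape(n, n)
    energies[bits] = penalised_energy(x)

best = min(energies, key=energies.get)
print(best, energies[best])   # -> (1, 0, 0, 1) with energy 3.0, the feasible optimum
```

With a sufficiently large penalty weight, the ground state of this diagonal Hamiltonian is the feasible optimal assignment; the trade-off discussed in the abstract is that such penalty terms add two-qubit interactions and hence circuit depth, whereas the constrained-mixer approach keeps the cost simple but complicates the mixer and the initial state.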