I am a postdoctoral research assistant in computer vision and machine learning at the Torr Vision Group, working with Professor Philip Torr at the University of Oxford. Prior to that, I received my MSc and PhD degrees from King Abdullah University of Science & Technology (KAUST), where I was part of the Image and Video Understanding Lab (IVUL), advised by Professor Bernard Ghanem. I have worked on a variety of problems that I personally find interesting and challenging.
PhD in Electrical Engineering (4.0/4.0), 2020
King Abdullah University of Science and Technology (KAUST)
MSc in Electrical Engineering (4.0/4.0), 2016
King Abdullah University of Science and Technology (KAUST)
BSc in Electrical Engineering (3.99/4.0), 2014
Kuwait University
We revisit the benefits of merging classical vision concepts with deep learning models. In particular, we explore how replacing the first layers of various deep architectures with Gabor layers, i.e. convolutional layers with filters based on learnable Gabor parameters, affects robustness against adversarial attacks. We observe that architectures enhanced with Gabor layers gain a consistent boost in robustness over regular models and preserve high test performance, while these layers introduce only a negligible increase in the number of parameters. We then exploit the closed-form expression of Gabor filters to derive an expression for a Lipschitz constant of such filters, and harness this theoretical result to develop a regularizer we use during training to further enhance network robustness. We conduct extensive experiments with various architectures (LeNet, AlexNet, VGG16 and WideResNet) on several datasets (MNIST, SVHN, CIFAR10 and CIFAR100) and demonstrate large empirical robustness gains. Furthermore, we experimentally show how our regularizer provides consistent robustness improvements.
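As a rough illustration of the idea (not the code released with the paper), a Gabor layer can be implemented as a convolution whose kernels are generated on the fly from a small set of learnable Gabor parameters. The sketch below assumes PyTorch and the standard Gabor filter definition; parameter ranges and the choice to share each kernel across input channels are my own simplifications.

```python
# Minimal sketch of a convolutional layer whose filters are Gabor filters with
# learnable parameters. Assumes PyTorch; not the authors' implementation.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaborConv2d(nn.Module):
    """Convolution whose kernels are generated from learnable Gabor parameters."""

    def __init__(self, in_channels, out_channels, kernel_size=7, stride=1, padding=3):
        super().__init__()
        self.in_channels = in_channels
        self.kernel_size = kernel_size
        self.stride = stride
        self.padding = padding
        # One set of Gabor parameters per output channel (initial ranges are arbitrary).
        self.sigma = nn.Parameter(torch.rand(out_channels) * 2 + 1)    # envelope width
        self.theta = nn.Parameter(torch.rand(out_channels) * math.pi)  # orientation
        self.lambd = nn.Parameter(torch.rand(out_channels) * 4 + 2)    # wavelength
        self.psi = nn.Parameter(torch.rand(out_channels) * math.pi)    # phase offset
        self.gamma = nn.Parameter(torch.ones(out_channels))            # aspect ratio

    def forward(self, x):
        k = self.kernel_size
        # Spatial grid centred on the kernel.
        coords = torch.arange(k, dtype=x.dtype, device=x.device) - (k - 1) / 2
        yy, xx = torch.meshgrid(coords, coords, indexing="ij")
        xx, yy = xx[None], yy[None]                       # (1, k, k), broadcast over channels
        theta = self.theta[:, None, None]
        x_rot = xx * torch.cos(theta) + yy * torch.sin(theta)
        y_rot = -xx * torch.sin(theta) + yy * torch.cos(theta)
        sigma = self.sigma[:, None, None]
        gamma = self.gamma[:, None, None]
        lambd = self.lambd[:, None, None]
        psi = self.psi[:, None, None]
        envelope = torch.exp(-(x_rot ** 2 + (gamma * y_rot) ** 2) / (2 * sigma ** 2))
        carrier = torch.cos(2 * math.pi * x_rot / lambd + psi)
        gabor = envelope * carrier                        # (out_channels, k, k)
        # Share each Gabor kernel across input channels for simplicity.
        weight = gabor[:, None].expand(-1, self.in_channels, -1, -1).contiguous()
        return F.conv2d(x, weight, stride=self.stride, padding=self.padding)

# Example: a Gabor layer in place of a first convolution.
layer = GaborConv2d(3, 16)
out = layer(torch.randn(1, 3, 32, 32))                    # -> (1, 16, 32, 32)
```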
This paper studies how encouraging semantically-aligned features during deep neural network training can increase network robustness. Recent works observed that Adversarial Training leads to robust models, whose learnt features appear to correlate with human perception. Inspired by this connection from robustness to semantics, we study the complementary connection: from semantics to robustness. To do so, we provide a tight robustness certificate for distance-based classification models (clustering-based classifiers), which we leverage to propose ClusTR (Clustering Training for Robustness), a clustering-based and adversary-free training framework to learn robust models. Interestingly, ClusTR outperforms adversarially-trained networks by up to 4% under strong PGD attacks. Moreover, it can be equipped with simple and fast adversarial training to improve the current state-of-the-art in robustness by 16%-29% on CIFAR10, SVHN, and CIFAR100.
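The flavour of certificate behind ClusTR can be illustrated with a simple nearest-centroid example: for a distance-based classifier, half the gap between the distance to the nearest other-class centroid and the nearest same-class centroid bounds how far a point can be perturbed, in the space where distances are measured, without changing the prediction. The sketch below is a hedged toy version of this idea, not the exact certificate derived in the paper.

```python
# Toy certificate for a nearest-centroid (distance-based) classifier: a perturbation
# with L2 norm below (d_other - d_same) / 2 cannot flip the prediction, since each
# distance changes by at most the perturbation norm. Illustration only.
import torch

def nearest_centroid_predict(features, centroids, centroid_labels):
    """Classify each feature vector by its nearest centroid (Euclidean distance)."""
    dists = torch.cdist(features, centroids)                   # (N, K)
    return centroid_labels[dists.argmin(dim=1)], dists

def certified_radius(features, centroids, centroid_labels):
    """L2 radius within which the nearest-centroid prediction provably cannot change."""
    preds, dists = nearest_centroid_predict(features, centroids, centroid_labels)
    same = centroid_labels[None, :] == preds[:, None]           # (N, K) mask
    d_same = dists.masked_fill(~same, float("inf")).min(dim=1).values
    d_other = dists.masked_fill(same, float("inf")).min(dim=1).values
    return (d_other - d_same) / 2

# Usage with random feature vectors and one centroid per class.
feats = torch.randn(8, 16)
cents = torch.randn(4, 16)
labels = torch.tensor([0, 1, 2, 3])
print(certified_radius(feats, cents, labels))
```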
The impressive performance of deep neural networks (DNNs) has immensely strengthened the line of research that aims at theoretically analyzing their effectiveness. This has incited research on how DNNs react to noisy input, namely developing adversarial input attacks and strategies that lead to DNNs that are robust to these attacks. To that end, in this paper, we derive exact analytic expressions for the first and second moments (mean and variance) of a small piecewise linear (PL) network (Affine, ReLU, Affine) subject to Gaussian input. In particular, we generalize the second-moment expression of Bibi et al. to arbitrary input Gaussian distributions, dropping the zero-mean assumption. We show that the new variance expression can be efficiently approximated, leading to much tighter variance estimates as compared to the preliminary results of Bibi et al. Moreover, we experimentally show that these expressions are tight under simple linearizations of deeper PL-DNNs, where we investigate the effect of the linearization sensitivity on the accuracy of the moment estimates. Lastly, we show that the derived expressions can be used to construct sparse and smooth Gaussian adversarial attacks (targeted and non-targeted) that tend to lead to perceptually feasible input attacks.
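For intuition, the first moment of such a small network admits a simple closed form: the hidden pre-activation is Gaussian, and the mean of a ReLU applied to a Gaussian is known analytically. The sketch below is my own illustration of that first-moment computation, checked against Monte Carlo; the expressions in the paper, in particular for the second moment, are more general.

```python
# Closed-form mean of an (Affine, ReLU, Affine) network under Gaussian input,
# verified against Monte Carlo. Uses the standard identity
# E[ReLU(Z)] = m * Phi(m/s) + s * phi(m/s) for Z ~ N(m, s^2).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 4, 8, 3
A, b = rng.standard_normal((d_hid, d_in)), rng.standard_normal(d_hid)
C, c = rng.standard_normal((d_out, d_hid)), rng.standard_normal(d_out)
mu = rng.standard_normal(d_in)
Sigma = 0.5 * np.eye(d_in)                                   # input covariance

# Analytic mean: the pre-activation g = A x + b is Gaussian with mean m and
# per-coordinate standard deviation s, so E[ReLU(g)] is available coordinate-wise.
m = A @ mu + b
s = np.sqrt(np.diag(A @ Sigma @ A.T))
relu_mean = m * norm.cdf(m / s) + s * norm.pdf(m / s)
analytic = C @ relu_mean + c

# Monte Carlo estimate of the same quantity.
x = rng.multivariate_normal(mu, Sigma, size=200_000)
mc = (np.maximum(x @ A.T + b, 0) @ C.T + c).mean(axis=0)

print(np.max(np.abs(analytic - mc)))                         # small, e.g. ~1e-2
```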
Invited to give a talk at the SIAM Discrete Math Conference at the Georgia Institute of Technology on our work on decision boundaries from a tropical geometry perspective
PRIS19, Dead Sea, Jordan. A Basket of Computer Vision Research Problems Slides
EECVC19, Odessa, Ukraine. Optimization Approach to a Block of Layers and Derivative Free Optimization Slides
CVPR18, Utah, USA. Analytic Expressions for Probabilistic Moments of PL-DNN With Gaussian Input Slides
CVPR17, Hawaii, USA. FFTLasso: Large-Scale LASSO in the Fourier Domain Slides
Optimization and Big Data Conference 2018, KAUST, Saudi Arabia. High Order Tensor Formulation for Convolutional Sparse Coding
ECCV16, Amsterdam, Netherlands. Target Response Adaptation for Correlation Filter Tracking Slides