
Adel Bibi

Postdoctoral Research Assistant

University of Oxford

Biography

I am a postdoctoral research assistant in computer vision and machine learning at the Torr Vision Group, working with Professor Philip Torr at the University of Oxford. Prior to that, I was a PhD student at King Abdullah University of Science & Technology (KAUST) as part of the Image and Video Understanding Lab (IVUL), advised by Professor Bernard Ghanem. I have worked on a variety of problems that I personally find interesting and challenging.

Interests

  • Computer Vision
  • Machine Learning
  • Optimization

Education

  • PhD in Electrical Engineering (4.0/4.0), 2020

    King Abdullah University of Science and Technology (KAUST)

  • MSc in Electrical Engineering (4.0/4.0), 2016

    King Abdullah University of Science and Technology (KAUST)

  • BSc in Electrical Engineering (3.99/4.0), 2014

    Kuwait University

News

  • [October 15th, 2020]: I joined the Torr Vision Group working with Philip Torr at the University of Oxford.
  • [July 2nd, 2020]: Our paper on how Gabor layers enhance robustness was accepted to ECCV20. arXiv.
  • [June 30th, 2020]: One paper is out on new expressions for the output moments of ReLU-based networks, with various new applications. arXiv.
  • [June 24th, 2020]: New paper with SOTA results, backed by theory, on training robust models through feature clustering. arXiv.
  • [March 31st, 2020]: I have successfully defended my PhD thesis.
  • [Dec 20th, 2019]: One paper accepted to ICLR20.
  • [Nov 11th, 2019]: One spotlight paper accepted to AAAI20.
  • [Sept 25th, 2019]: Recognized as an outstanding reviewer for ICCV19. Link.
  • [August 5th, 2019]: I was invited to give a talk at PRIS19, Dead Sea, Jordan, about the most recent computer vision and machine learning research from the IVUL group. I also gave a one-hour workshop on deep learning and PyTorch. Slides1/ Slides2/ Material.
  • [July 6th, 2019]: I was invited to give a talk at the Eastern European Computer Vision Conference (EECVC19), Odessa, Ukraine. Slides.
  • [June 28th, 2019]: I gave a talk at the Biomedical Computer Vision Group directed by Prof Pablo Arbelaez, Bogota, Colombia. Slides.
  • [June 15th, 2019]: Attended CVPR19.
  • [June 9th, 2019]: Recognized as an outstanding reviewer for CVPR19. This is the second time in a row for CVPR. Check it out. :)
  • [May 26th, 2019]: A new paper is out on derivative-free optimization with momentum, with new rates and results on continuous control tasks. arXiv.
  • [May 25th, 2019]: New paper! New provably tight interval bounds are derived for DNNs. This allows for very simple robust training of large DNNs. arXiv.
  • [May 11th, 2019]: How can you train robust networks that outperform 2x-21x data augmentation? New paper out on arXiv.
  • [May 6th, 2019]: Attended ICLR19 in New Orleans.
  • [Feb 4th, 2019]: New paper on derivative-free optimization with importance sampling is out! Paper is on arXiv.
  • [Dec 22nd, 2018]: One paper accepted to ICLR19, Louisiana, USA.
  • [Nov 6th, 2018]: One paper accepted to WACV19, Hawaii, USA.
  • [July 3rd, 2018]: One paper accepted to ECCV18, Munich, Germany.
  • [June 19th, 2018]: Attended CVPR18 and gave an oral talk on our most recent work on analyzing piecewise linear deep networks using Gaussian network moments. TensorFlow, PyTorch, and MATLAB code is released.
  • [June 17th, 2018]: Received a fully funded scholarship to attend the AI-DLDA 18 summer school in Udine, Italy. Unfortunately, I won’t be able to attend due to time constraints. Link
  • [June 15th, 2018]: New paper out! “Improving SAGA via a Probabilistic Interpolation with Gradient Descent”.
  • [April 30th, 2018]: I’m interning for 6 months at Intel Labs in Munich this summer with Vladlen Koltun.
  • [April 22nd, 2018]: Recognized as an outstanding reviewer for CVPR18. I’m also on the list of emergency reviewers. Check it out. :)
  • [March 6th, 2018]: One paper accepted as [Oral] in CVPR 2018.
  • [Feb 5, 2018]: Awarded the best KAUST poster prize at the Optimization and Big Data Conference.
  • [December 11, 2017]: TCSC code is on GitHub.
  • [October 22, 2017]: Attended ICCV17, Venice, Italy.
  • [July 22, 2017]: Attended CVPR17 in Hawaii and gave an oral presentation on our work on solving the LASSO with FFTs.
  • [July 16, 2017]: FFTLasso’s code is available online.
  • [July 9, 2017]: Attended the ICVSS17, Sicily, Italy.
  • [June 15, 2017]: Selected to attend the International Computer Vision Summer School (ICVSS17), Sicily, Italy.
  • [March 17, 2017]: 1 paper accepted to ICCV17.
  • [March 14, 2017]: Received my Nanodegree in Deep Learning from Udacity.
  • [March 3, 2017]: 1 oral paper accepted to CVPR17, Hawaii, USA.
  • [October 19, 2016]: ECCV16’s code has been released on GitHub.
  • [October 8, 2016]: Attended ECCV16, Amsterdam, Netherlands.
  • [July 11, 2016]: 1 spotlight paper accepted to ECCV16, Amsterdam, Netherlands.
  • [June 26, 2016]: Attended CVPR16, Las Vegas, USA. Two papers presented.
  • [May 13, 2016]: ICCVW15 code is now available online.
  • [April 11, 2016]: Successfully defended my Master’s Thesis.
  • [March 2, 2016]: 2 papers (1 spotlight) accepted to CVPR16, Las Vegas, USA.
  • [November 20, 2015]: 1 paper accepted to ICCVW15, Santiago, Chile.
  • [June 8, 2015]: Attended CVPR15, Boston, USA.

Recent Publications


Gabor Layers Enhance Network Robustness

We revisit the benefits of merging classical vision concepts with deep learning models. In particular, we explore the effect on robustness against adversarial attacks of replacing the first layers of various deep architectures with Gabor layers, i.e. convolutional layers with filters that are based on learnable Gabor parameters. We observe that architectures enhanced with Gabor layers gain a consistent boost in robustness over regular models and preserve high generalizing test performance, even though these layers come at a negligible increase in the number of parameters. We then exploit the closed form expression of Gabor filters to derive an expression for a Lipschitz constant of such filters, and harness this theoretical result to develop a regularizer we use during training to further enhance network robustness. We conduct extensive experiments with various architectures (LeNet, AlexNet, VGG16 and WideResNet) on several datasets (MNIST, SVHN, CIFAR10 and CIFAR100) and demonstrate large empirical robustness gains. Furthermore, we experimentally show how our regularizer provides consistent robustness improvements.
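
As a rough illustration of the core idea (a minimal sketch, not the released code), the snippet below builds a convolutional layer whose filters are generated on the fly from learnable Gabor parameters, so gradients flow into orientation, wavelength, phase, and envelope parameters rather than into free filter weights. The parameterization and initialization choices here are my own assumptions.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaborConv2d(nn.Module):
    """Conv layer whose filters come from learnable Gabor parameters.
    Illustrative sketch of the idea in 'Gabor Layers Enhance Network
    Robustness'; the exact parameterization is an assumption."""
    def __init__(self, in_channels, out_channels, kernel_size=7):
        super().__init__()
        self.out_channels, self.in_channels, self.k = out_channels, in_channels, kernel_size
        n = out_channels * in_channels
        # Learnable Gabor parameters: orientation, wavelength, phase,
        # envelope width, and aspect ratio (one set per 2D filter slice).
        self.theta = nn.Parameter(torch.rand(n) * math.pi)
        self.lambd = nn.Parameter(torch.rand(n) * 4 + 2)
        self.psi   = nn.Parameter(torch.rand(n) * 2 * math.pi)
        self.sigma = nn.Parameter(torch.rand(n) * 2 + 1)
        self.gamma = nn.Parameter(torch.ones(n))

    def forward(self, x):
        # Sample the Gabor function on a k x k grid for every filter.
        half = self.k // 2
        ys, xs = torch.meshgrid(
            torch.arange(-half, half + 1, dtype=torch.float32, device=x.device),
            torch.arange(-half, half + 1, dtype=torch.float32, device=x.device),
            indexing="ij",
        )
        xs, ys = xs.flatten(), ys.flatten()  # (k*k,)
        cos_t, sin_t = torch.cos(self.theta), torch.sin(self.theta)
        # Rotated coordinates, one row per filter: shape (n, k*k).
        x_r =  xs[None, :] * cos_t[:, None] + ys[None, :] * sin_t[:, None]
        y_r = -xs[None, :] * sin_t[:, None] + ys[None, :] * cos_t[:, None]
        envelope = torch.exp(-(x_r ** 2 + (self.gamma[:, None] * y_r) ** 2)
                             / (2 * self.sigma[:, None] ** 2))
        carrier = torch.cos(2 * math.pi * x_r / self.lambd[:, None] + self.psi[:, None])
        filters = (envelope * carrier).view(self.out_channels, self.in_channels, self.k, self.k)
        return F.conv2d(x, filters, padding=half)
```

In the paper's setting, a layer like this would replace the first convolution of an architecture such as VGG16, with the rest of the network left unchanged.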

ClusTR: Clustering Training for Robustness

This paper studies how encouraging semantically-aligned features during deep neural network training can increase network robustness. Recent works observed that Adversarial Training leads to robust models, whose learnt features appear to correlate with human perception. Inspired by this connection from robustness to semantics, we study the complementary connection: from semantics to robustness. To do so, we provide a tight robustness certificate for distance-based classification models (clustering-based classifiers), which we leverage to propose ClusTR (Clustering Training for Robustness), a clustering-based and adversary-free training framework to learn robust models. Interestingly, ClusTR outperforms adversarially-trained networks by up to 4% under strong PGD attacks. Moreover, it can be equipped with simple and fast adversarial training to improve the current state-of-the-art in robustness by 16%-29% on CIFAR10, SVHN, and CIFAR100.
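
To get a feel for where such a certificate comes from, consider the simplest distance-based classifier: assign x to the nearest class centroid. By the triangle inequality, an l2 perturbation of norm delta changes each centroid distance by at most delta, so the prediction provably cannot flip while delta < (d2 - d1)/2, where d1 and d2 are the two smallest distances. The toy sketch below illustrates only this standard margin argument, not ClusTR's exact certificate, which applies to clustering-based classifiers on learned features.

```python
import numpy as np

def certified_radius(x, centroids):
    """Nearest-centroid prediction with an l2 robustness certificate.
    If ||delta|| < (d2 - d1) / 2, with d1 <= d2 the two smallest centroid
    distances, the prediction cannot change: each distance moves by at
    most ||delta|| (triangle inequality). Toy illustration only."""
    dists = np.linalg.norm(centroids - x, axis=1)
    order = np.argsort(dists)
    pred = order[0]
    radius = (dists[order[1]] - dists[order[0]]) / 2.0
    return pred, radius

# Toy usage: three class centroids in the plane.
centroids = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
x = np.array([0.5, 0.2])
pred, radius = certified_radius(x, centroids)
print(pred, radius)  # class 0, certified against any perturbation with norm < radius
```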

Network Moments: Extensions and Sparse-Smooth Attacks

The impressive performance of deep neural networks (DNNs) has immensely strengthened the line of research that aims at theoretically analyzing their effectiveness. This has incited research on the reaction of DNNs to noisy input, namely developing adversarial input attacks and strategies that lead to robust DNNs to these attacks. To that end, in this paper, we derive exact analytic expressions for the first and second moments (mean and variance) of a small piecewise linear (PL) network (Affine, ReLU, Affine) subject to Gaussian input. In particular, we generalize the second-moment expression of Bibi et al. to arbitrary input Gaussian distributions, dropping the zero-mean assumption. We show that the new variance expression can be efficiently approximated leading to much tighter variance estimates as compared to the preliminary results of Bibi et al. Moreover, we experimentally show that these expressions are tight under simple linearizations of deeper PL-DNNs, where we investigate the effect of the linearization sensitivity on the accuracy of the moment estimates. Lastly, we show that the derived expressions can be used to construct sparse and smooth Gaussian adversarial attacks (targeted and non-targeted) that tend to lead to perceptually feasible input attacks.
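
To see the flavor of these expressions in the simplest case: for a scalar Gaussian input X ~ N(mu, sigma^2), the first moment of a ReLU has the classical closed form E[max(0, X)] = mu * Phi(mu/sigma) + sigma * phi(mu/sigma), with Phi and phi the standard normal CDF and PDF. The snippet below checks this against Monte Carlo; it is only the scalar special case, not the paper's network-level expressions.

```python
import numpy as np
from scipy.stats import norm

def relu_mean(mu, sigma):
    """Closed-form E[max(0, X)] for X ~ N(mu, sigma^2):
    mu * Phi(mu/sigma) + sigma * phi(mu/sigma).
    Scalar special case of the first-moment expressions above."""
    z = mu / sigma
    return mu * norm.cdf(z) + sigma * norm.pdf(z)

mu, sigma = 0.3, 1.5
rng = np.random.default_rng(0)
samples = np.maximum(0.0, rng.normal(mu, sigma, size=1_000_000))
print(relu_mean(mu, sigma))  # ~0.760 (analytic)
print(samples.mean())        # close to the analytic value
```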

Talks

  • Invited to give a talk at the SIAM Conference on Discrete Mathematics at Georgia Institute of Technology on our work on the decision boundaries of deep networks from a tropical geometry perspective

  • PRIS19, Dead Sea, Jordan. A Basket of Computer Vision Research Problems Slides

  • EECVC19, Odessa, Ukraine. Optimization Approach to a Block of Layers and Derivative Free Optimization Slides

  • CVPR18, Utah, USA. Analytic Expressions for Probabilistic Moments of PL-DNN With Gaussian Input Slides

  • CVPR17, Hawaii, USA. FFTLasso: Large-Scale LASSO in the Fourier Domain Slides

  • Optimization and Big Data Conference 2018, KAUST, Saudi Arabia. High Order Tensor Formulation for Convolutional Sparse Coding

  • ECCV16, Amsterdam, Netherlands. Target Response Adaptation for Correlation Filter Tracking Slides

Contact