My main talks on Graph Neural Networks in 2020
1. Introduction to GNNs
2. Recent developments in GNNs
3. Benchmarking GNNs
Hope they can be useful.
Happy new year to everyone!
Sharing my lecture slides on "Recent Developments of Graph Network Architectures" from my deep learning course. It is a review of some exciting works on GNNs published in 2019-2020.
#feelthelearn
Sharing my lecture slides on Attention Nets/Transformers with two simple codes for (1) Language Modeling and (2) Sequence-To-Sequence Modeling to understand Transformers from scratch.
Slides:
Codes:
Happy to deliver a remote lecture on "Graph Convolutional Networks" tomorrow for the NYU Deep Learning course of
@ylecun
and
@alfcnz
. Slides and video will be made available.
Our paper "Benchmarking Graph Neural Networks" has been accepted for publication at Journal of Machine Learning Research
@JmlrOrg
!
(after rejection from NeurIPS, ICLR and ICML :)
My 10 Favorite Algorithms
• K-means and spectral clustering
• FFT
• Random forest and gradient boosting
• Personalized PageRank
• ADMM and primal-dual optimization
• EVD and SVD
• Backpropagation and SGD
• Convnet and transformer
• REINFORCE algorithm
• Diffusion model
Notebooks for Lecture 5 on Recommendation on Graphs
Lab1: Google PageRank (sketched below)
Lab2: Collaborative/low-rank recommendation
Lab3: Content/graph-Dirichlet recommendation
Lab4: Hybrid recommendation
Let's get started!😀
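For a taste of Lab1 before opening the notebook, here is a minimal power-iteration sketch of PageRank in NumPy; the 4-node graph and the damping factor 0.85 are illustrative choices, not the lab's data.

```python
import numpy as np

# Toy 4-node adjacency matrix (hypothetical, for illustration only)
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
n, d = len(A), 0.85                   # d = damping factor
r = np.ones(n) / n                    # start from the uniform distribution
for _ in range(100):                  # power iteration
    r = (1 - d) / n + d * P.T @ r
print(r)                              # stationary PageRank scores (sum to 1)
```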
Lecture 1 offers a high-level introduction to Graph Machine Learning.
Additionally, check out the top-tier books by
@mmbronstein
,
@TacoCohen
,
@PetarV_93
, and outstanding computing libraries by
@DGLGraph
,
@PyG_Team
to learn GML.
My new favorite easy-to-use interactive 3D visualization library is Plotly!
Here is a visualization of the CIFAR dataset where images are first projected into 2,048-dimensional hidden vectors using InceptionV3 and then reduced to 3D using UMAP.
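A minimal sketch of that pipeline (assuming the umap-learn and plotly packages), with random vectors standing in for the InceptionV3 features:

```python
import numpy as np
import umap
import plotly.express as px

feats = np.random.randn(1000, 2048)      # stand-in for InceptionV3 hidden vectors
labels = np.random.randint(0, 10, 1000)  # stand-in for CIFAR class labels
xyz = umap.UMAP(n_components=3).fit_transform(feats)  # 2,048-D -> 3-D
fig = px.scatter_3d(x=xyz[:, 0], y=xyz[:, 1], z=xyz[:, 2],
                    color=labels.astype(str), opacity=0.7)
fig.show()  # opens an interactive, rotatable 3D scatter plot
```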
Sharing my lecture slides on Generative Models with VAE and GAN.
I present the models from scratch and provide simple codes to understand the core ideas.
Slides:
Codes:
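Not the course code itself, but a minimal VAE sketch in PyTorch showing the two core ideas, the reparameterization trick and the reconstruction + KL loss (inputs assumed in [0, 1], e.g. MNIST pixels):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # outputs [mu, logvar]
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return torch.sigmoid(self.dec(z)), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    rec = F.binary_cross_entropy(x_hat, x, reduction='sum')       # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
    return rec + kl
```

The GAN part follows the same from-scratch spirit: a generator and a discriminator trained adversarially.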
I will be sharing soon my course material on Graph Machine Learning from last year.
Initially, I planned to wait for a 2nd iteration of the course to polish and improve it, but considering I may not teach it again, I have decided to share the first version :)
Instructions for running the course notebooks with GitHub, Google Colab or local installation:
Course repo:
It is remarkable how much easier it has become to run DL codes compared to when I began teaching it back in 2014!
New paper on benchmarking graph neural networks w/
@vijaypradwi
@chaitjo
T. Laurent and Y. Bengio
Our goal was to identify trends and good building blocks for GNNs.
Ever wanted to code Graph Transformer (GT) from *scratch* using a few lines of code with
@PyTorch
and
@DGLGraph
? :)
See below my course material
Slides:
GitHub:
Paper:
Coding GT step-by-step 👇
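To give the flavor (a sketch under my own simplifications, not the exact course code): a single-head graph attention layer where DGL's built-in message functions restrict attention to graph edges, which is what keeps a sparse GT cheap.

```python
import torch
import torch.nn as nn
import dgl.function as fn
from dgl.nn.functional import edge_softmax

class GraphTransformerLayer(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.Q, self.K, self.V = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
        self.scale = d ** -0.5

    def forward(self, g, h):
        g.ndata['q'], g.ndata['k'], g.ndata['v'] = self.Q(h), self.K(h), self.V(h)
        g.apply_edges(fn.u_dot_v('k', 'q', 'score'))  # attention logits on edges only
        g.edata['a'] = edge_softmax(g, g.edata['score'] * self.scale)  # softmax over neighbors
        g.update_all(fn.u_mul_e('v', 'a', 'm'), fn.sum('m', 'h'))      # weighted aggregation
        return h + g.ndata['h']                       # residual connection
```

Multi-head attention, normalization, and the feed-forward block then follow as in a standard Transformer.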
An amazing finding in 2019 was the double descent of M. Belkin.
This is not only a deep theoretical result in machine learning; it also has great practical interest => make your net large, train long enough, and that's it! Bye bye early stopping :)
Video:
My lecture slides on "Attention Neural Networks". I introduce the popular families of attention nets with MemoryNets, Transformers (seq2seq and language modeling) and BERT:
I've also upgraded the demo of A. Rush to PyTorch 1.1:
If new to graph learning, here is a warm-up notebook to learn to use graphs:
• Build graph w/ features and compute basic message-passing function w/
@DGLGraph
• Convert into graph formats w/ DGL, NetworkX, dense/sparse PyTorch
• Visualize graph
Code:
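Those steps condensed into a minimal sketch, with a hypothetical 4-node cycle in place of the notebook's graph:

```python
import torch
import dgl
import dgl.function as fn
import networkx as nx

g = dgl.graph(([0, 1, 2, 3], [1, 2, 3, 0]))  # toy directed 4-cycle
g.ndata['feat'] = torch.randn(4, 8)          # node features

# one basic message-passing step: average the neighbors' features
g.update_all(fn.copy_u('feat', 'm'), fn.mean('m', 'h'))
print(g.ndata['h'].shape)                    # torch.Size([4, 8])

nxg = dgl.to_networkx(g)                     # DGL -> NetworkX conversion
nx.draw(nxg, with_labels=True)               # quick visualization (needs matplotlib)
```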
Slides of my talk "The Transformer Network for the Traveling Salesman Problem" for
@ipam_ucla
workshop "Deep Learning and Combinatorial Optimization".
We improve on recent learned heuristics, with optimality gaps of 0.004% for TSP50 and 0.39% for TSP100.
I've moved to the School of Computing at the National University of Singapore.
@NUSingapore
@NUSComputing
Contact me if you are interested in a PhD/Postdoc in GraphNNs+X
My lecture slides on "Deep Reinforcement Learning". I explain step-by-step the popular RL algorithms DQN, REINFORCE, QAC, AAC:
And theory is supported by PyTorch implementations :)
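As a taste, here is a minimal sketch of the REINFORCE policy-gradient loss in PyTorch, assuming one episode of log-probabilities and rewards has already been collected (the baseline-free variant with normalized returns):

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    # log_probs: list of log pi(a_t|s_t) tensors; rewards: list of scalar r_t
    returns, R = [], 0.0
    for r in reversed(rewards):              # discounted returns G_t, computed backwards
        R = r + gamma * R
        returns.insert(0, R)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    return -(torch.stack(log_probs) * returns).sum()  # minimize => ascend E[sum G_t]
```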
Graph Machine Learning course
Lecture 4 presents Graph SVM 🌟
The celebrated Support Vector Machine (SVM), coupled with the kernel method and graph Dirichlet regularization, is one of the best classification models (in the absence of feature learning :)
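For the kernel-SVM part alone (the graph Dirichlet regularization term is not sketched here), a minimal scikit-learn example on a toy dataset:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel='rbf', C=1.0).fit(X_tr, y_tr)  # SVM with an RBF kernel
print(clf.score(X_te, y_te))                    # test accuracy
```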
I joined the AI lab
@SeaGroup
as Head of Graph Machine Learning.
I will explore theory and applications of GNNs.
If you want to work on this exciting topic, there are opportunities for full-time RS/RE, industrial PhDs (w/ NUS), interns, etc. Please contact me at bressonx@sea.com
Notebooks for Lecture 2 on Graph Science
Lab1: Generate LFR social networks
Lab2: Visualize spectrum of point cloud & grid
Lab3/4: Graph construction for two-moons & text documents (k-NN graph sketched below)
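A minimal sketch of the two-moons construction as a k-NN graph (scikit-learn + NetworkX; k=10 is an illustrative choice, not the lab's setting):

```python
import networkx as nx
from sklearn.datasets import make_moons
from sklearn.neighbors import kneighbors_graph

X, y = make_moons(n_samples=200, noise=0.1, random_state=0)
A = kneighbors_graph(X, n_neighbors=10, mode='distance')  # sparse adjacency matrix
G = nx.from_scipy_sparse_array(A)                         # NetworkX >= 2.7
print(G.number_of_nodes(), G.number_of_edges())
```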
I recently spoke with a journalist about applications of Graph Neural Networks.
I made a few slides to facilitate the discussion.
I share them here -- hopefully they can be useful :)
Scikit-learn is one of the best machine learning tools! I teach this library to my undergraduate students.
Great news -- the developers offer a free course (in English) from May 18 to July 14. Register here:
Everything you need to know about
@scikit_learn
, the 3rd most-used open-source
#machinelearning
software in the world 🚀 From May 18 to July 14, its creators are offering their MOOC, free and in English, to learn how to build predictive models!
📍 Registration
Presenting new work on GNNs/(Graph)Transformers:
"Graph Neural Networks with Learnable Structural and Positional Representations"
with Anh Tuan Luu, Thomas Laurent, Yoshua Bengio and
@xbresson
.
Paper:
Code:
#2minutebrief
👇
I will give a series of lectures on Graph Neural Networks this week for the 5th Int'l Summer School on Data Science (SSDS 2020). The summer school was scheduled to be in the beautiful city of Split, Croatia, but given the situation it will be virtual.
The LLM industry relies on Transformers w/ O(n^2) complexity. GNNs are O(E)~O(n) for sparse graphs.
Industry application: Google Maps ETA developed by
@PetarV_93
et al.
Lecture 3 reviews Graph Clustering 🌟
Clustering is a cornerstone topic that beautifully connects combinatorial optimization, continuous optimization, graph theory and spectral theory.
Positional encodings are essential for two reasons:
1. They guarantee that Transformers/GraphNNs are universal approximators for functions invariant under index permutation. Most real-world graphs have natural symmetries, like the line graph for Transformers.
There is a feeling that positional encodings are far more important than you would guess at first, and that they are slowly making convolutions irrelevant.
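For concreteness, one standard graph PE is the first k non-trivial eigenvectors of the normalized Laplacian; a minimal NumPy sketch (keep in mind the arbitrary sign of eigenvectors):

```python
import numpy as np

def lap_pe(A, k=4):
    # A: dense (n x n) adjacency matrix; returns an (n x k) positional encoding
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]  # normalized Laplacian
    w, V = np.linalg.eigh(L)       # eigenvalues in ascending order, eigenvectors as columns
    return V[:, 1:k + 1]           # skip the trivial constant eigenvector
```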
Stéphane Mallat is a prodigious researcher & teacher.
He has a new course on math for high-dim data (in French ;)
His previous courses
2021:
2020:
2019:
2018:
Course material to learn GIN, an MP-GNN as powerful as the WL graph isomorphism test.
Slides:
Code:
Paper:
A student asked me why GIN cannot classify CSL graphs.
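A minimal sketch of the GIN update, h_v' = MLP((1 + eps) * h_v + sum of neighbor features), in PyTorch + DGL (not the course code). On CSL graphs, which are regular, this sum-aggregation sees the same neighborhood statistics everywhere; that 1-WL limitation is exactly why GIN cannot tell them apart.

```python
import torch
import torch.nn as nn
import dgl.function as fn

class GINLayer(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))  # learnable epsilon
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, g, h):
        g.ndata['h'] = h
        g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'agg'))  # sum over neighbors
        return self.mlp((1 + self.eps) * h + g.ndata['agg'])
```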
Actually it can be shown that convolution and transformer/attention are (almost) equivalent for graphs.
Architectures like Transformers and ConvNets are slowly but happily converging.
I am still puzzled as to why some people are so alarmist about our limited understanding of AI.
By the same logic, they should also be alarmist about planes, since our understanding of air turbulence is quite limited!
I was asked about self-attention and cross-attention.
See slides 53-58 that intuitively describe SA & CA (w/ adaptive context, receptive field, hierarchy) and why it is essential to use multiple layers for deep representation and multi-step reasoning.
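In code, the only difference between SA and CA is where the keys and values come from; a minimal single-head PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5  # scaled dot-product logits
    return F.softmax(scores, dim=-1) @ v                   # weighted sum of values

x = torch.randn(10, 64)      # sequence of query tokens
ctx = torch.randn(20, 64)    # context sequence
sa = attention(x, x, x)      # self-attention: keys/values from x itself
ca = attention(x, ctx, ctx)  # cross-attention: keys/values from the context
```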
Terence Tao discusses "AI and Mathematics" at IMO 2024:
He provided clear examples of the deep connections between mathematics and computers throughout history.
He also expressed the view that "AI is disruptive, but there is also a sense of continuity".
New paper w/
@he_xiaoxin
@BryanHooi1
T Laurent
@AdamPerold
@ylecun
We introduce Graph MLP-Mixer, a GNN w/ three key properties:
1) captures long-range dependency
2) keeps the same low, linear speed/memory complexity as MP-GNNs
3) provides high expressivity
New paper "Feature Collapse" w/ Thomas Laurent James von Brecht
ArXiv:
GitHub:
Feature learning is a critical mechanism in deep learning, enabling generalization. But a comprehensive understanding of this mechanism remains elusive.
I am speaking tomorrow at the "AI for Science" workshop at the NRF. It will be engaging! Local folks are welcome to join us at CREATE Tower Level 2, Singapore 138602
ICML Reviewer2
"The paper is well-structured and easy to follow" ⇒ Presentation: 1 poor
".. is a significant contribution" ⇒ Contribution: 2 fair
"Due to my limited time to review, I did not catch up with the most novel point in the paper" ⇒ Rating: 1 Very Strong Reject
😞
Trying my best to convince my committee to let me teach Deep Learning and Graph Machine Learning, but without any success so far..
What is the purpose of having any research expertise if you cannot share and teach it?
Any recommendation?
End of the workshop "Artificial Intelligence and Discrete Optimization"
Thanks again to
@ipam_ucla
, the speakers and the participants for an exceptional workshop!
All talks will be available at
Thrilled to give two talks at ACM
#KDD2021
.
1. Deep Learning on Graphs (DGL-KDD’21) Talk on Aug 14th 7pm PST
2. Applied Data Science (ADS) Talk on Aug 15th 10:45pm PST
@kdd_news
Gil Strang is one of the smartest and kindest individuals I have met.
During my postdoc at UCLA, Gil made me grasp the beauty of graphs.
He's always been supportive of young people, at every career step.
His signature: "Best wishes and send any papers I should know about!" 😁
I recommend watching the talk by
@lipmanya
about equivariant GraphNNs. Wonderful talk, technical and accessible (w/ nice illustrative figures to explain the math concepts).
Wow - geometric deep learning is going mainstream!
Great job on the introductory video
@sirajraval
!
In addition, a perfect documentary video of researchers doing geometric deep learning :)
Deep Learning generally works well on Euclidean data, but it turns out that graphs & 3D objects are not in that category. Geometric Deep Learning offers us a powerful solution to this problem!
Working on the ICML rebuttal and still surprised by the reviewers' criticisms: "not new", "too simple", "not theoretical", "not SOTA".
Let me use the example of Transformer (TR).
Excited to present deep learning for genome assembly !
A new project at the intersection of genomics, graph neural networks and combinatorial optimization.
Data, code, arxiv available below.
My recent talk on this topic
After many months, I'm proud to share our work on untangling genome assembly graphs with GNNs. Or, GNNome Assembly (sorry not sorry)🧬
Paper:
Code/Data:
with
@xbresson
, T. Laurent,
@Martin_fschmitz
, and
@msikic
.
Read more in 🧵
Reviewer: Compare with techniques x and y, datasets z and w
Us: We did, and got significantly better results
R: Whatever, I think the novelty is low
Takeaway: When you want to reject a paper and lack good reasons, use novelty.
Novelty cannot be rebutted by experiments!
Causality For Machine Learning by Bernhard Schölkopf
"The article argues that the hard open problems of machine learning and AI are intrinsically related to causality, and explains how the field is beginning to understand them."
A review paper with 136 references.
@JeffDean
presenting "Exciting Trends in Machine Learning" at
@NUSingapore
's Department of Computer Science.
Jeff also generously spent time after his talk engaging with (very happy) students 😀
On my way to
#ICLR2023
in Rwanda!
Thomas Laurent, James von Brecht and I will present "Long-Tailed Learning Requires Feature Learning"
We present a theoretical study that demonstrates the importance of feature learning in achieving effective generalization.
I was blown away that deep learning can understand physics and chemistry differently from standard theories like PDEs, differential geometry, etc.
As a physicist trained w/ these giant theories, I found it incredible how
@ylecun
, Y. Bengio,
@geoffreyhinton
have revolutionized science!
Two exciting updates for the Learning on Graphs Conference!
1) Virtual conference from Nov 26th 2024
2) Join the first in-person conference from Sept 28-30 2025
Don't miss out on the opportunity to receive high-quality reviews!
LoG Conference 2024 is back!!! 👉 We are looking for more reviewers! We have a special emphasis on review quality via monetary rewards, a more focused conference topic, and a low reviewer load (max 3 papers). But for this we need your help! Sign up here:
Our two recent submissions:
1. Theoretical paper is not good because it does not provide real-world SOTA and does not improve existing algorithms.
2. New algorithm paper with real-world SOTA is not good because it is not theoretical and not novel enough.
🙃
My favorite textbooks on convex optimization:
- Convex Optimization, S. Boyd and L. Vandenberghe
Video lectures:
- Introductory Lectures on Convex Optimization, Y. Nesterov
Video lectures:
Great talk by
@ofirnachum
on casting the RL Bellman dynamic programming problem as an unconstrained Fenchel-Rockafellar primal-dual convex optimization problem (easier to solve).
Huge congrats to DGL team on this milestone release!
The (sparse) matrix formulation of GNNs is quite exciting (more intuitive than MP)
@DGLGraph
@wenmingye
@smolix
And big thanks for implementing our graph Transformers
@vijaypradwi
w/ 10 lines of code😁
DGL 1.0 has arrived! Huge milestone of the past 3+ years of development.
👉Check out the blog for the release summary and the highlight of the brand new DGL-Sparse package
#DGL
#GML
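The sparse-matrix view in a nutshell: one GNN layer is H' = ReLU(A_hat H W) with A_hat stored as a sparse matrix. A minimal PyTorch sketch on a toy graph (no degree normalization, unlike a proper GCN):

```python
import torch

idx = torch.tensor([[0, 1, 2, 2],      # source nodes
                    [1, 2, 0, 3]])     # destination nodes
A = torch.sparse_coo_tensor(idx, torch.ones(4), (4, 4))  # sparse adjacency matrix
H = torch.randn(4, 16)                 # node features
W = torch.randn(16, 16)                # layer weights
H_next = torch.relu(torch.sparse.mm(A, H @ W))  # one GNN layer as a sparse matmul
```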
Indeed, graphs are a scam -- an invention from mathematicians to control people!
I can prove it -- it is well-known that data is truly i.i.d. (written in all machine learning textbooks).
So there exists no relationship between data points, and graph representation is just an illusion.
I taught my students Deep Graph Library (DGL) in my lecture on "Graph Neural Networks" today. It is a great resource to develop GNNs with
@PyTorch
. Kudos to the team
@GraphDeep
!
As a physicist, I was trained to simplify models to unveil their underlying first principles.
Unfortunately, simplicity is the enemy of Reviewer #2 in ML conferences.
"Obscure technique and theorem are the touchstone in accepting new ML papers."
I'd love
@NeurIPSConf
@icmlconf
to adopt
@openreviewnet
because it'd provide a great discussion forum for authors, reviewers and the AI *community*.
I'd feel mentally better if I knew my answers could be seen/discussed by all researchers interested in my project.