I’m hiring a postdoc to work with me on exciting projects in generative modelling (AI) and/or uncertainty quantification. You'll be part of a great team, embedded in
@AmlabUva
and the UvA-Bosch Delta Lab.
Apply here:
RT appreciated!
#ML
#GenAI
🧑🎓 PhD opportunity in Machine Learning
I am recruiting a PhD student to work with me at
@AmlabUva
in Amsterdam on topics such as approximate statistical inference, causal inference, generalization, etc.
Deadline April 10
We have published a tutorial on importance sampling and sequential Monte Carlo methods and their applications in ML and statistics. Highlights include learning proposals and target distributions, unbiasedness results for normalization constants, and more.
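To give a flavour of the unbiasedness result: for any proposal q that covers the target's support, the average importance weight is an unbiased estimate of the normalization constant. A minimal sketch (the Gaussian target and proposal here are made up for illustration, not taken from the tutorial):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized target: gamma(x) = Z * N(x; 2, 1) with Z = 5 (Z pretend-unknown).
Z_true = 5.0
def gamma(x):
    return Z_true * np.exp(-0.5 * (x - 2.0) ** 2) / np.sqrt(2 * np.pi)

# Proposal q(x) = N(x; 0, 3^2), wide enough to cover the target.
def q_pdf(x):
    return np.exp(-0.5 * (x / 3.0) ** 2) / (3.0 * np.sqrt(2 * np.pi))

N = 200_000
x = rng.normal(0.0, 3.0, size=N)
w = gamma(x) / q_pdf(x)   # importance weights
Z_hat = w.mean()          # unbiased estimate of the normalization constant Z
```

The estimator is unbiased for any sample size; more samples (or a better proposal) just shrink its variance.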
We combine variational and Monte Carlo methods for approximate Bayesian inference. VSMC is a new variational family with theoretical guarantees. We provide a new VI interpretation of the IWAE and other MC objectives!
@blei_lab
@scott_linderman
#aistats2018
Inspired by the new chapter on sequential Monte Carlo (SMC) methods in
@sirbayes
updated (excellent) probabilistic machine learning book, I finally got around to uploading the latest version of our SMC tutorial, "Elements of Sequential Monte Carlo"
BREAKING NEWS
The Royal Swedish Academy of Sciences has decided to award the 2024
#NobelPrize
in Physics to John J. Hopfield and Geoffrey E. Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks.”
Our tutorial on importance sampling and sequential Monte Carlo methods is now published in Foundations and Trends in ML. Should be available at NeurIPS. Highlights include learning proposals and target distributions, unbiasedness results, and more.
Thank you!! I am truly honored ☺️ Many thanks also to all my collaborators and my supervisors (Fredrik Lindsten and Thomas Schön) for making my PhD studies a wonderful experience.
Congratulations to
@chris_naesseth
(Theory & Methods) and
@AleAviP
(Applied Methods), winners of the Savage Award for best doctoral dissertations in Bayesian econometrics and statistics! They presented their work at a special session at virtual
#JSM2020
👇
@AmstatNews
1/n
I am currently looking for a postdoc to join our team at
@AmlabUva
,
@UvA_IvI
and the UvA-Bosch Delta Lab to work on e.g. generative AI, AI4Science, or uncertainty quantification.
Vacancy:
RT appreciated!
#MachineLearning
#AI
Twisted Diffusion Sampling - Practical and Asymptotically Exact Conditional Sampling for Diffusion Models
Theoretically grounded conditional sampling for diffusion models with applications to inpainting, Bayesian inverse problems, protein design, ...
Proceedings of the 6th Symposium on Advances in Approximate Bayesian Inference, held in Vienna and co-located with
#ICML2024
, is now available at
#AABI
#Bayes
Time to start thinking about that AISTATS submission! :)
Abstract deadline: Thu Oct 6 AoE
Paper deadline: Thu Oct 13 AoE
Conference will be hybrid, in-person location in the US.
The AISTATS 2023 Call for Papers is out!
Submission Deadline: Thu Oct 13, AoE
Program Chairs: Jennifer Dy, Jan-Willem van de Meent (
@jwvdm
)
General Chair: Francisco Ruiz
Workflow Chairs: Christian Naesseth (
@chris_naesseth
), Davin Hill (
@davinhill_
)
Markovian Score Climbing: Variational Inference with KL(P||Q)
Presenting my paper today at
#NeurIPS2020
on how to design an algorithm that provably minimizes the inclusive KL divergence, KL(P||Q), wrt Q for use in e.g. approximate Bayesian inference.
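For intuition, a toy sketch of the score-climbing idea (illustrative only, not the paper's implementation): samples approximately from P drive stochastic gradients of E_P[log q_θ], and ascending that objective minimizes KL(P||Q) over θ. Here P is a known Gaussian standing in for the MCMC kernel's output, and q is a Gaussian family, so the fit is easy to check:

```python
import numpy as np

rng = np.random.default_rng(1)

# Target P = N(3, 2^2). We sample it exactly here, standing in for the
# Markov chain that would provide (approximate) samples in practice.
def sample_p(n):
    return rng.normal(3.0, 2.0, size=n)

mu, log_sigma = 0.0, 0.0   # parameters of q = N(mu, sigma^2)
lr = 0.05
for _ in range(5000):
    x = sample_p(32)
    sigma = np.exp(log_sigma)
    # Stochastic gradients of E_P[log q(x)] w.r.t. (mu, log_sigma):
    g_mu = np.mean((x - mu) / sigma**2)
    g_ls = np.mean((x - mu) ** 2 / sigma**2 - 1.0)
    mu += lr * g_mu
    log_sigma += lr * g_ls
```

The parameters climb to mu ≈ 3 and sigma ≈ 2, i.e. q matches P, the minimizer of the inclusive KL within this family.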
The 5th Symposium on Advances in Approximate Bayesian Inference welcomes your accepted ML journal & conference papers via fast-track! Come present your work at the poster session.
OpenReview:
CfP:
#MachineLearning
#Bayes
#ICML2023
#AABI
We present Transport Score Climbing for approximate inference. By connecting adaptive transport maps, HMC, and variational inference with the forward KL, we obtain a powerful new tool for statistical inference.
Adji does amazing work in ML! Really enjoyed reading about her work on CHI-VI and Reweighted EM, among others. Looking forward to learning more about prescribed GANs to combat mode collapse!
📢 Presenting Neural Diffusion Models (NDM) — a generalization of conventional diffusion models that enables defining and learning time-dependent non-linear transformations of data in a simulation-free setting.
📜:
🧵1/7
Deadline approaching (~2 weeks) for my PhD student position at AMLab in Amsterdam. DMs are open for anyone who would like to know more or has any questions.
Intended starting date is after summer, around September.
RTs appreciated! :)
🧑🎓 PhD opportunity in Machine Learning
I am recruiting a PhD student to work with me at
@AmlabUva
in Amsterdam on topics such as approximate statistical inference, causal inference, generalization, etc.
Deadline April 10
Looking forward to attending and giving an invited talk on our recent diffusion model work at the Generative Models and Uncertainty Quantification workshop in Copenhagen next week!
Really a great line-up of speakers.
#GenAI
#UQ
#ML
#Diffusions
#Flows
When to use it? Always! 😊 If you want to know more about SMC, and especially its use in ML, I will shamelessly plug my tutorial:
Elements of Sequential Monte Carlo
(Nicolas and Omiros's book is great too!)
This week's episode is all about Sequential Monte Carlo (SMC) and when to use it. But what is it, actually? 🤔
Let Nicolas Chopin explain the two meanings of SMC - No.1 is here:
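If you are wondering what SMC looks like in code, here is a minimal bootstrap particle filter; the linear-Gaussian model is just a stand-in so the example stays self-contained:

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear-Gaussian state-space model:
#   x_t = 0.9 x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.5^2)
T, N = 50, 1000
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal()
y = x_true + rng.normal(0.0, 0.5, size=T)

particles = rng.normal(0.0, 1.0, size=N)   # samples from the prior
filt_mean = np.zeros(T)
for t in range(T):
    if t > 0:
        particles = 0.9 * particles + rng.normal(size=N)  # propagate
    logw = -0.5 * ((y[t] - particles) / 0.5) ** 2         # observation log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()                                          # normalize weights
    filt_mean[t] = np.sum(w * particles)                  # filtering mean estimate
    particles = rng.choice(particles, size=N, p=w)        # multinomial resampling

rmse = np.sqrt(np.mean((filt_mean - x_true) ** 2))
```

Resampling at every step is the simplest (bootstrap) choice; in practice one usually resamples adaptively based on the effective sample size, as covered in the tutorial.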
I'm very excited to announce that everyone's favourite Bayesian symposium is back for 2023! 🚀🚀
The 5th Symposium on Advances in Approximate Bayesian Inference (AABI) will take place in 🏖️Honolulu, Hawaii🌴 on Sunday, July 23rd, co-located with ICML!
Website:
With the
#ICML2024
decisions now out, if you are interested in approximate (Bayesian) inference consider presenting your ICML paper as a poster by submitting it to the AABI fast track. AABI 2024 is co-located with ICML and takes place on the expo day.
@foxmaiden
@alisonkgerber
You get to keep approximately $740 out of the $900. But the sales tax/VAT in Sweden is also significantly higher (25% in general, 12% for groceries).
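For the curious, the back-of-the-envelope arithmetic behind those numbers (the figures are only illustrative):

```python
# Effective income tax implied by keeping $740 of $900,
# and what the net amount buys once a 25% VAT is factored in.
gross, net = 900, 740
income_tax_rate = 1 - net / gross        # ~18% effective income tax
vat = 0.25
purchasing_power = net / (1 + vat)       # net expressed in pre-VAT prices
```

So the headline tax looks modest, but the VAT quietly takes another slice of what the net salary actually buys.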
Looking forward to a full week with
@aabi_org
and
@icmlconf
starting this Sunday!
🌟I am co-organising AABI on July 21
✨Presenting a paper (Neural Diffusion Models) and several workshop contributions during
#ICML2024
.
Reach out if you want to chat!
Tamara Broderick speaking on robustness at
#AISTATS2023
Last day of a great conference, with lots of interesting talks, posters, discussions, panels, and more.
@aistats_conf
Another incredible piece of work from
@FEijkelboom
and
@GrigoryBartosh
, together with our collaborators
@wellingmax
and
@jwvdm
.
We map flow matching onto a variational problem, and leverage this to sample discrete variables (in this case, graphs). See thread and paper!
#ML
#GenAI
Flow Matching goes Variational! 🐳
In recent work, we derive a formulation of flow matching as variational inference, obtaining regular FM as a special case.
Joint work with dream team
@GrigoryBartosh
,
@chris_naesseth
,
@wellingmax
, and
@jwvdm
.
📜
🧵1/11
@DavidSKrueger
I can't answer the original question as I do regularly submit papers to these conferences (and occasionally get them published too😉). However, I would certainly add AISTATS and UAI to this list.
[Paper] Neural diffusion models that...
Invert "noise" applied to syntax trees
Iteratively edit code while preserving syntactic validity, making it easy to combine the model w/search
Learn to convert images into programs that produce those images
@YulingYao
It was not clear to me what your definition of overfitting was in this case. The discussion on alpha seems to indicate that, by your definition, any updated distribution that takes the likelihood/data into account at all is overfitting.
Have you ever wondered why papers from top universities/research labs often appear in the top few positions in the daily email and web announcements from arXiv?
Why is that the case? Why should I care?
Thanks everyone for checking out the poster!
@GrigoryBartosh
did a great job managing the crowd and explaining.
We have some very exciting follow-up work. Check out the thread about Neural Flow Diffusion Models
#DiffusionModels
#Flow
#SDE
#ML
🔥 Excited to share our new work on Neural Flow Diffusion Models — a general, end-to-end, simulation-free framework that works with arbitrary noising processes and even enables learning them!
📜:
🧵 1/11
@avt_im
All salaries paid in EUR, GBP, JPY, SEK, etc. have certainly lost a lot versus salaries paid in USD over the last 10-15 years. Just in the last year we are talking about a 25-30% devaluation vs the USD. Even if GBP were stronger vs USD, £30k would certainly be too low for a postdoc.
Where I did my PhD, the defence usually takes about 2-3 hours. The decision is made by a committee of 3 tenured academics (1-2 from the same university, the rest external). Another external academic is invited to be the opponent. The opponent presents the candidate's work for 45 min. (1/2)
Thesis defenses vary a lot. In the US, I've only seen the format where you give a 1 hr talk and then the committee asks a couple of softball questions.
At Waterloo, it's a 30 min talk, but multiple rounds of questioning from all 5 committee members (up to 2 hours total).
What's it like at your school?
Editing the automatically generated captions for NeurIPS 🤔
KL -> kale
gradients -> ingredients
kernel -> colonel
To be honest it might just be more fun to leave these in 🤣
@predict_addict
@YulingYao
Not the question I asked. I am well aware of the shortcomings of Bayesian methods that you are pointing out here. Again, I have no horse in this race other than trying to understand what definition of overfitting was being used.
One positive thing about a virtual NeurIPS is that I can more easily jump between all the interesting workshops. Still too many to attend them all though 😢
#NeurIPS2020
This is some really exciting work by Grigory that not only bridges the gap between flow matching and score matching, but also allows us to learn the noising process to significantly improve density estimation and generation.
Check out the thread and paper for more!
🔥 Excited to share our new work on Neural Flow Diffusion Models — a general, end-to-end, simulation-free framework that works with arbitrary noising processes and even enables learning them!
📜:
🧵 1/11
Big boost for AI and machine learning research in Sweden. The Knut and Alice Wallenberg Foundation grants 1 billion SEK (approx. $120 million USD) to extend the WASP program to AI
#AI
#MachineLearning
#Sweden
This is literally one of the first things you should do in a literature study! Take the keywords from your title and input them into your favorite search engine.
A paper with “natural gradients” and “variational inference” in the title should not miss our work. Just google those terms and you will find our papers. We have written 7 papers since SVI (which they cite). This work actually uses a trick that we wrote up as a *workshop* paper in 2016. 2/7
@predict_addict
@YulingYao
From my understanding the argument in the blog post was that Bayesian methods overfit even in the case of the model being true. Just trying to get an understanding of the definition of overfit used, not make an argument for or against Bayesian methods.
Through Monte Carlo-based gradients we can also go beyond the standard reverse KL divergence, leading to approximations with more favourable properties. I will talk about so-called score climbing algorithms that enable optimisation of the forward KL divergence:
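The "more favourable properties" are easy to see in a tiny grid-based example (everything below is illustrative): within a Gaussian family, the forward KL covers all the mass of a bimodal target, while the reverse KL locks onto a single mode.

```python
import numpy as np

xs = np.linspace(-10, 10, 4001)
dx = xs[1] - xs[0]

def normal(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

p = 0.5 * normal(xs, -3, 1) + 0.5 * normal(xs, 3, 1)   # bimodal target

# Forward KL(p||q) over Gaussians is minimized by moment matching:
mu_f = np.sum(xs * p) * dx
s_f = np.sqrt(np.sum((xs - mu_f) ** 2 * p) * dx)       # wide: covers both modes

# Reverse KL(q||p): a grid search finds the optimum sitting on one mode.
mu_r, s_r = min(
    ((mu, s) for mu in np.linspace(-5, 5, 41) for s in np.linspace(0.5, 5, 19)),
    key=lambda ms: np.sum(normal(xs, *ms)
                          * (np.log(normal(xs, *ms) + 1e-300)
                             - np.log(p + 1e-300))) * dx,
)
```

The forward-KL fit spreads over both modes (s_f ≈ √10) while the reverse-KL fit is mode-seeking (mu_r ≈ ±3, s_r ≈ 1), which is exactly the trade-off score climbing lets you choose between.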
Introducing Gemma: a family of lightweight, state-of-the-art open models for developers and researchers to build with AI. 🌐
We’re also releasing tools to support innovation and collaboration - as well as to guide responsible use.
Get started now. →
@sam_power_825
@adjiboussodieng
@alek_thiery
In my mind, you can use a VAE to do both VI and VEM. The focus is to leverage amortization to learn a posterior approximation q(z|x). Whether you use that approximation to learn model parameters or for inference on new data depends on the application.
I will be discussing the interplay between Monte Carlo methods and variational methods for approximate inference, as well as our recent work on variational inference using forward KL
I agree wholeheartedly with the basic premise of this blog post. If an algorithm doesn't work for a simple model that we understand, why would we expect it to work for a complex model which we don't fully understand?
If you've published with TMLR on approximate inference in 2024, submit your work to the AABI Fast-Track to be able to share and discuss your work at the symposium co-located with ICML in July!
See link in our CfP:
Definitely worth testing. The problems with funding processes that have very low acceptance rates are similar to the problems with peer review for venues with very low acceptance rates.