The Machine Learning & Inference Research team I co-lead @Netflix @NetflixResearch is hiring PhD interns for Summer 2024. Looking for a research internship (tackling industry problems while also focusing on publishable research!)? Apply through this listing:
The Product ML Research team I co-lead @netflix @NetflixResearch is hiring!
Want to do ML research that drives basic science+pubs *and* business impact? Are you a deep thinker *and* a builder? Join us!
Still in PhD? Intern with us!
I am boycotting @informs2022. We cannot ask our women colleagues to travel to a state where their health is put in danger and their choices about their own bodies limited. I am calling on @INFORMS to move venues from Indiana or change to virtual-only in light of the new law.
My research group (located at Cornell Tech campus in NYC) is looking to recruit a postdoc to work on topics related to causal inference, fairness in ML, and sequential decision making (bandits+RL). Positions are renewable (1-2 years).
Please retweet to spread the word. 🙏
Excited to be co-organizing the NeurIPS 2021 Workshop on Causal Inference Challenges in Sequential Decision Making happening Dec 14 online. Please consider submitting contributions. CfP on website. Due date 9/30.
Offline #ReinforcementLearning converges faster than you think! Offline RL is about learning new dynamic decision policies from existing data -- crucial in high-stakes domains like medicine. Theory predicts its regret converges as 1/√n. But when we run a sim we see 1/n 🤔🧵
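A toy illustration of the theory-vs-simulation gap (my own hypothetical two-armed problem, not the paper's MDP experiments): the plug-in policy learned from offline data has regret that shrinks much faster than the 1/√n worst-case bound suggests, because for a fixed gap the chance of picking the wrong arm vanishes quickly.

```python
import numpy as np

def offline_regret(n, n_reps=2000, gap=0.1, rng=None):
    """Mean regret of the plug-in policy (pick the empirically best arm)
    learned from n offline samples per arm of a toy 2-armed problem."""
    if rng is None:
        rng = np.random.default_rng(0)
    mu = np.array([0.5, 0.5 + gap])                       # true arm values
    # Empirical means from n samples per arm (CLT scale 1/sqrt(n)).
    means = rng.normal(mu, 1.0 / np.sqrt(n), size=(n_reps, 2))
    picked = means.argmax(axis=1)                         # plug-in policy
    return float(np.mean(mu.max() - mu[picked]))

for n in [100, 400, 1600]:
    print(n, offline_regret(n))   # decays much faster than 1/sqrt(n)
```

Growing n by 16x would cut a 1/√n regret only 4x; in the simulation the drop is far steeper.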
My favorite part is finally here: the panel discussion!! With our awesome lineup of speakers, moderated by @david_sontag. At the #NeurIPS2019 causal ML workshop “Do the Right Thing”.
@alexdamour tells us about deconfounding scores, which generalize propensity and prognostic scores and help with covariate reduction, at the #NeurIPS2019 causal ML workshop “Do the Right Thing”
Very excited to share new work with @angelamczhou on partial identification in off-policy evaluation (OPE) in infinite-horizon RL when there are unobserved confounders: . OPE is crucial for RL applications where exploration is limited, like medicine.👩⚕️ 1/n
Personalized interventions using heterogeneous causal effects are the next big thing. But are they fair? Impossible to say: standard disparity measures are unidentifiable! In @angelamczhou & I give ways to credibly assess fairness despite this #NeurIPS2019
We cannot fix what we cannot measure! Thank you @NSF for funding my FAI proposal on *credible* fairness assessments and robustly fair algorithms:
Proud+excited to be working with the amazing people at on this project.
Very excited to be involved in four papers being presented at @icmlconf #ICML2020 this week.
A short thread spotlighting the papers with just *one* sentence each:
Excited to post new paper with the amazing @Jacobb_Douglas & Kevin Guo: "Doubly-Valid/Doubly-Sharp Sensitivity Analysis for Causal Inference with Unmeasured Confounding"
In this time of uncertainty it's good to have some checks on your causal inferences 1/n
Had a blast presenting at the Online Causal Inference Seminar yesterday together with @XiaojieMao. You can watch the recorded presentation here:
And a big thank you to Alex Belloni for a fantastic discussion!
Posted a big update to Localized Debiased ML in advance of talk next week at Online Causal Inference Seminar.
New: more special cases incl IV-LQTE, empirics, code, more discussion/exposition, + more #EconTwitter #causality @VC31415
Andrea Rotnitzky on a general recipe for choosing the *optimal* minimal adjustment sets for causal inference given a particular causal DAG at the #NeurIPS2019 causal ML workshop “Do the Right Thing”
Heard that RL doubly robust off-policy evaluation is data efficient, right? In fact that's not quite true if we're dealing with a Markov decision process, as we often are in RL. In we provide the first efficient estimator, Double Reinforcement Learning.
If you're at #informs2019 and want to hear about Double #ReinforcementLearning (, ) please come to our session at 1:30pm in room 221 -- session will be all RL+Causal+Bandits! 🤓
Next up in the #NeurIPS2019 causal inference session series, Andrew will be presenting our work on off-policy evaluation with latent confounders, where optimal balancing saves the day via duality! (5:30pm East Exhibition Hall B+C #137)
Congrats @hamsabastani, Kimon, Vishal, and coauthors for this incredible achievement, and thank you for showing the way on using AI/ML to inform intelligent COVID policy without stupid travel bans.
Just published in @Nature. May represent the most important successful application of #AI in the pandemic (only a few are on the list): Reinforcement learning for efficient testing at the Greek border
Thanks for the incredible 99 submissions to @NeurIPSConf {Causal}∩{ML} workshop "Do the Right Thing." Notifications will be out any minute. Congrats to the accepted contributions. Looking forward to seeing all of you in Vancouver! #NeurIPS2019
Some can easily afford not to come, but for job candidates it's detrimental to their career. Consider a pregnant woman on the job market, even with a known and desired pregnancy and even a currently-seemingly healthy one. Should we put her in this position at all?
@Adam235711 This 100%! I never heard of "OR" until Martin Wainwright suggested I apply to MIT ORC (never even realized Berkeley had IEOR). Only applied to math PhDs otherwise, with slim/no chance at an academic job a foregone conclusion. Visited MIT ORC and realized that's where I belong.
*Today* at 11:30am Eastern in Online Causal Inference Seminar:
Differencing (DID) identifies effects in TWFE → analogous xforms identify effects in factor models!
& don't need T→∞ as in synth ctrl and matrix factorization
@XiaojieMao @EconometricaEd
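In symbols (my schematic notation, not the paper's): in the two-way fixed effects model, differencing across time removes the unit effect, and the factor model generalizes the additive effects to interactive ones that analogous transforms can difference out.

```latex
% Two-way fixed effects: time-differencing removes the unit effect \alpha_i
Y_{it} = \alpha_i + \beta_t + \tau D_{it} + \varepsilon_{it}
\;\Longrightarrow\;
Y_{it} - Y_{it'} = (\beta_t - \beta_{t'}) + \tau\,(D_{it} - D_{it'})
                 + (\varepsilon_{it} - \varepsilon_{it'}).

% Factor (interactive fixed effects) model: \alpha_i + \beta_t generalizes to
% \lambda_i^\top f_t, and analogous transforms difference out \lambda_i^\top f_t
Y_{it} = \lambda_i^\top f_t + \tau D_{it} + \varepsilon_{it}.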
To bring in the new year @XiaojieMao +Masa+I just posted a paper on Localized Debiased ML for estimating causal quantities using ML methods when hi-dim nuisances depend on the estimand. In this thread I'll explain why this prob is so important and what we did 1/
Had a great time as discussant for the peerless Guido Imbens and his fantastic talk at the Online Causal Inference Seminar #OCIS. Rewatch it here:
Talked about the larger context for data combination for identification, efficiency, & partial identification
Yep! Important distinction. Never liked the term ITE (Sorry, @ShalitUri @frejohk @david_sontag. Otherwise ❤️ TARNet+relatives)
But I do think if we read the I as "Individual-level" it does capture something that, tho equivalent to CATE, is important conceptually when data is rich
I have been reading several papers recently where the term "individualized treatment effect" is wrongly defined by E[Y(1)-Y(0)|C=ci] and ci is a set of characteristics associated with individual i. See .
Warning: This is still population-based 1/2
IVs are often a viable identification alternative to assuming all confounders are observed. But existing tools can have a hard time handling complex data/relationships. Turns out using deep adversarial training to solve continuum GMM works pretty well #NeurIPS2019
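For context, a minimal sketch of the classical linear special case that the adversarial/continuum-GMM approach generalizes: with instrument Z, the moment condition E[Z(Y − τX)] = 0 gives the IV/2SLS estimate. All numbers below are synthetic; this is the textbook estimator, not the paper's neural-network method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                          # instrument
u = rng.normal(size=n)                          # unobserved confounder
x = 0.8 * z + u + 0.1 * rng.normal(size=n)      # endogenous treatment
y = 2.0 * x + u + 0.1 * rng.normal(size=n)      # outcome; true effect = 2

# Naive OLS is biased upward by the confounder u.
ols = (x @ y) / (x @ x)

# IV / 2SLS: project x on z, then regress y on the projection.
xhat = z * ((z @ x) / (z @ z))
iv = (xhat @ y) / (xhat @ x)
print(ols, iv)   # OLS overshoots the true effect of 2; IV is close to 2
```

The hard part the tweet refers to is doing this with flexible (neural-net) function classes instead of linear ones, which is where adversarial training comes in.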
I'll be talking about Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning today, 12pm EST (Sess. 6)!
Poster:
Gather:
Paper:
Hello world. Been lurking for a bit, and in the meantime learned about some exciting new papers and listened in on thought-provoking conversations. Time to participate too! To start, in my next few tweets I am going to tell you about some new papers I'm excited to be involved in.
Wrote a post for Mgmt Sci blog with @XiaojieMao & @angelamczhou for our featured article "Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination"
If You Can't Measure It, Bound It: Credibly Auditing Algorithms for Fairness
(Re health danger: I know there are "exceptions" in law but anything that makes drs hesitate for purely political/punitive and non-medical reasons or opt to xfer a patient rather than save their lives asap is putting women's health at risk. & that's even putting choice aside 🤬)
At #NeurIPS2019 and want to learn about using adversarial training to solve conditional moment problems (eg, instrumental variables) with neural networks? Then come see Andrew present our work on DeepGMM () today at 10:45AM at East Exhibition Hall B+C #183.
Masa and I just posted a new paper on *efficient* off-policy policy gradients: . We establish a lower bound on how well one can estimate policy gradients and develop an algo that achieves this bound & exhibits 3-way double robustness. ☘️ 1/n
In @angelamczhou @XiaojieMao & I tackle the question of how to credibly assess disparate impacts on classes when membership is unobserved -- an urgent question in both fair lending and healthcare equity reform. 1/3
Next up at the #NeurIPS2019 fairness cluster, let us tell you about the xAUC fairness metric for bipartite ranking and how to use it to assess disparities in predictive risk scores. Poster #118 at 5pm today. W/ @angelamczhou.
Curse of horizon in #ReinforcementLearning is that longer & longer trajectories from different policies look less alike. This is fatal to RL in unsimulatable/unexplorable settings like medicine. In we use special RL structure to efficiently break the curse 🤖👩⚕️
What are the disparate impacts of personalizing to maximize conditional treatment effects? Unknowable. Let us tell you how to *credibly* assess disparities of personalized interventions at our poster #72 at 10:45am today at #NeurIPS2019. W/ @angelamczhou.
@SusanMurphylab1 & @Susan_Athey: even if we shouldn’t really care about classical confidence intervals, the decision makers at funding agencies, world bank, etc do care right now and so that’s where we have to start. Maybe we can change that in the future...
There may be very different objectives in “fair pricing” depending on the context. @angelamczhou & I try to categorize and reconcile them and their (in)compatibility in this new paper. (To appear at @FAccTConference)
@MuhammedKambal @katforrester This paper by @nathankallus @angelamczhou is nice in noting how quite different fairness concerns pop up in different algorithmic applications, perhaps encouraging some humility about applying one abstract idea across the board
FYI we updated our Double Reinforcement Learning draft -- we got some questions about asymptotic variance vs finite samples so we added new finite-sample guarantees where leading term is controlled by the efficient variance. Thanks for the questions! 🙏
A thoroughly enjoyable interview with @red_abebe. What a journey!! For those curious about the work she discusses, she gave an excellent talk (recorded) about it at
@red_abebe please please bring us some real Ethiopian coffee to the next in-person conf 🙏
From “The Joy of x,” a Quanta podcast hosted by @StevenStrogatz: Rediet Abebe’s journey from Ethiopia to Harvard, and more importantly, the inequity she encountered in America, helped inspire her to design algorithms that optimize resource allocation.
Doubly robust off-policy eval is asymp locally efficient. Self-normalized importance sampling is stable in finite samples. 🤔Which to choose? Get best of both worlds (even if misspecified) thanks to a (normalized) empirical likelihood approach #NeurIPS2019
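A toy sketch of the two ingredients being combined, on a hypothetical contextual bandit of my own (this shows plain IS, self-normalized IS, and a doubly robust estimator side by side; it does not reproduce the paper's empirical-likelihood estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)                          # contexts
a = rng.binomial(1, 0.5, size=n)                # uniform logging policy
y = x * a + rng.normal(scale=0.5, size=n)       # reward: arm 1 pays x, arm 0 pays 0

pi = (x > 0).astype(float)                      # target policy: arm 1 iff x > 0
w = np.where(a == 1, pi, 1 - pi) / 0.5          # importance weights pi/logging

q_target = x * pi                               # outcome model at the target arm
q_logged = x * a                                # outcome model at the logged arm

is_est = np.mean(w * y)                         # plain importance sampling
snis_est = np.sum(w * y) / np.sum(w)            # self-normalized IS (stable weights)
dr_est = np.mean(q_target + w * (y - q_logged)) # doubly robust

true_val = 1 / np.sqrt(2 * np.pi)               # E[max(x,0)] for x ~ N(0,1)
print(is_est, snis_est, dr_est, true_val)
```

All three recover the target policy's value here; the point of the paper is getting SNIS's finite-sample stability and DR's efficiency simultaneously.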
So I've tried and given up on setting up an application system.
To apply:
- Send me an email with subject “[PostDoc App] <Applicant Name>” with cover letter and CV.
- Ask two recommenders to send me an email with subject “[PostDoc Rec] <Applicant Name>” with their rec letter.
Due to recent disruptions & challenges arising in light of the evolving COVID-19 pandemic, we have decided to postpone our PAPER SUBMISSION DEADLINE to Friday January 21, 2022! 🗓️📢✍️ Updates to our website () will be made shortly.
Contextual bandits w/ linear rewards don't need exploration b/c can extrapolate ∞ly. For nondifferentiable rewards run separate non-contextual algos b/c so little info. We give optimal algo for all smoothness levels in b/w using both context & exploration
@david_sontag: “where have observational studies and methods had a big impact in medicine?”
Andrea: “HIV/AIDS treatment!! When-to-start policies revolutionized by such work.”
Reference: see @_MiguelHernan’s work
Optimizing efficient policy value estimates does *not* imply efficient learning of policy parameters! In a new paper () we consider what would actually be efficient for the common reduction of policy learning to weighted (cost-sensitive) classification 1/n
Today at the #NeurIPS2019 #ReinforcementLearning session, let us tell you about combining the benefits of self-normalization and double robustness in off-policy evaluation (no; DR w/ normed weights is not enough). (10:45 AM @ East Exhibition Hall B+C #209)
In 2 hours Andrew will be giving our @aistats_conf oral presentation on Off-Policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders.
Come hear him present:
@johncarlosbaez The phenomenon is more subtle in general stochastic optimization:
You have K stoch opt problems with separate datasets; what do you do? Minimize each sample avg approx? Or pool your data? Depends! But you can still mimic oracle shrinkage amount (even if 0)
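The separate-vs-pool tension can be seen in the simplest stand-in for K stochastic programs: K mean-estimation problems, each with its own dataset. A positive-part James-Stein rule picks the shrinkage amount from the data and mimics the oracle (shrinking toward the pool when the problems are similar, barely at all when they are not). A toy sketch, all numbers synthetic:

```python
import numpy as np

def mse_pair(rng, K=50, n=20, tau=0.3):
    """One replication: K mean-estimation problems (a stand-in for K
    stochastic programs), each solved from its own size-n dataset."""
    theta = rng.normal(0.0, tau, size=K)           # problem-specific optima
    sep = rng.normal(theta, 1.0 / np.sqrt(n))      # separate SAA solutions
    pool = sep.mean()                              # fully pooled solution
    s2 = 1.0 / n                                   # variance of each SAA solution
    # Positive-part James-Stein: data-driven shrinkage toward the pool.
    shrink = max(0.0, 1.0 - (K - 3) * s2 / np.sum((sep - pool) ** 2))
    js = pool + shrink * (sep - pool)
    return np.mean((sep - theta) ** 2), np.mean((js - theta) ** 2)

rng = np.random.default_rng(0)
mse_sep, mse_js = np.mean([mse_pair(rng) for _ in range(200)], axis=0)
print(mse_sep, mse_js)   # shrinkage lowers average risk across the K problems
```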
@deaneckles @alex_peys If you're trying to estimate the value of a treatment regime, Radon-Nikodym says as long as it is abs cts wrt logging policy then you have an importance weight and DR is exactly as usual. Otherwise (e.g., you want E[Y(f(X))] but logging is cts dist) the estimand is not regular 1/
Thanks for having me! Had a blast talking to you guys about fairness in A/B tests.
& repeating @SergeBelongie (apxly): "come to @AiCentreDK for the best combo of top-notch AI and top-notch quality of life" 😍
We are not a political organization. But we cannot operate an honest academic operation under a dictatorship. And we cannot continue to operate in the US with our international students constantly under threat. Get out and vote.
The next seminar will be on Thursday, November 19th, 2020 at 10:00 AM ET / 3:00 PM London / 11:00 PM Beijing. Nathan Kallus from Cornell University will give a talk on “Statistically Efficient Offline Reinforcement Learning”. Chengchun Shi from LSE will lead the discussion.
.@scottniekum teaching a robot to play tennis with *safe* inverse reinforcement learning at #NeurIPS2019 workshop on Safety and Robustness in Decision Making
"Why would we expect social causal dynamics in a counterfactual world to be the same as those in our world? If they aren’t the same, why do we care about these quantities at all?"
NEW from @uhlily, on DAGs, causality, and race in social science research.
@netflix @NetflixResearch I should add that the Research Scientist role is flexible regarding level depending on the candidate (what we call in academia "open rank")
Applicants should submit a cover letter, CV, and 2 letters of recommendation. Applications will be considered on a rolling basis and sending your materials early is encouraged. Applicants are also encouraged to email/DM me to notify me of intent to submit a complete application.
DeepMatch: Balancing Deep Covariate Representations for Causal Inference Using Adversarial Training
Balancing methods for causal inference are great but rely on a known representation (linear or kernel); we extend to neural nets using adversarial training
Assessing fairness of predictive risk scores requires us to think beyond binary classification. In we consider (un)fairness in bipartite ranking, where a natural metric, xAUC, arises for diagnosing disparities in risk score algos @angelamczhou #NeurIPS2019
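A minimal sketch of the metric on hypothetical data: xAUC(a, b) is the probability that a positive from group a is scored above a negative from group b, and the gap between the two cross-group directions diagnoses a ranking disparity. Group labels and score distributions below are invented for illustration:

```python
import numpy as np

def xauc(scores_pos_a, scores_neg_b):
    """P(score of a positive from group a exceeds score of a negative
    from group b), estimated over all pairs (ties count half)."""
    diff = scores_pos_a[:, None] - scores_neg_b[None, :]
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

rng = np.random.default_rng(0)
# Toy risk scores: positives in group A are scored lower than in group B.
pos_a = rng.normal(0.5, 1, 800); neg_a = rng.normal(0.0, 1, 800)
pos_b = rng.normal(1.0, 1, 800); neg_b = rng.normal(0.0, 1, 800)

x_ab = xauc(pos_a, neg_b)   # A's positives ranked above B's negatives
x_ba = xauc(pos_b, neg_a)   # B's positives ranked above A's negatives
print(x_ab, x_ba)           # the gap x_ba - x_ab is the xAUC disparity
```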
We know we can correct noisy state obs using latent var models. But there still exists no outcome-model-independent unbiased importance weights for off-policy eval as in the noiseless case. Surprise: balanced policy eval works and beats outcome modeling #NeurIPS2019
Contributor or not, come participate in an exciting dialogue between {Causal}∩{ML} researchers from all fields. We have an amazing line up of invited talks by @Susan_Athey, @SusanMurphylab1, Andrea Rotnitzky, @sshortreed, & Ying-Qi Zhao. 🤩🤩 Happening December 14 @NeurIPSConf.
@thorstenjoachim asks whether conventional learning-to-rank methods are fair at the #NeurIPS2019 workshop on Safety and Robustness in Decision Making
@EpiEllie You (& @Susan_Athey @edwardhkennedy) may also be interested in w/ @XiaojieMao where we study the value of *combining* surrogate-outcome data into causal analysis without the strong assumptions usually needed to ensure surrogates can *replace* real outcomes
Looking to host a postdoc interested in working within the disjunction of the pairwise conjunctions of causal inference using machine learning (or vice versa), reinforcement learning (esp. offline), algorithmic fairness, contextual bandits, optimization under uncertainty, or with
.@SusanMurphylab1 on the practical challenges of designing and implementing reinforcement learning algorithms in mobile health at the #NeurIPS2019 causal ML workshop “Do the Right Thing”
Interested in reinforcement learning *without* interaction with the environment or simulator? We're organizing a @NeurIPSConf 2020 workshop on Offline RL. Visit the homepage for more details including Call for Papers!
@deaneckles @alex_peys and then there's no single right way and no sqrt-n consistent regular estimator. We investigate 9 different ways to kernelize DR and analyze behavior under just rate conditions for nuisances in .
Tomorrow, Friday---giving a spotlight talk on fairness considerations for covariate-personalized pricing at Fair AI in Finance, 18:55 – 19:10 EST
Video:
Workshop:
To help better understand the theoretical foundations of batch offline RL, S. Meyn & I are organizing a Simons workshop this week with wonderful speakers!
Schedule:
Webinar link:
Yingqi Zhao on estimation and inference on *high-dimensional* individualized treatment regimes and semiparametric approaches thereto at the #NeurIPS2019 causal ML workshop “Do the Right Thing”
Dhanya Sridhar giving us a glimpse of her work with @yixinwang_ and @blei_lab on using counterfactual predictions to operationalize equality of opportunity and affirmative action in ML at the #NeurIPS2019 causal ML workshop
@jondr44 There's still bias, albeit 1/n, and you can't do exact inference. The magic of RCTs is unbiased, airtight causal inference. Shameless plug: you get the same efficiency w/o bias and w/ rand inf by balancing *before* randomization &