I am excited about my recent work () with Hanlin and Shuichi.
In 1949, Shannon showed that almost all Boolean functions require exponential circuit complexity. However, the challenge is: how can we identify a specific Boolean function with such hardness?
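For readers who haven't seen it, the counting argument behind Shannon's result goes roughly like this (a standard sketch, paraphrased by me, not part of the original tweet):

```latex
% A circuit with s gates on n inputs can be described by O(s \log s) bits
% (each gate names its type and its two inputs), so
\#\{\text{circuits of size } s\} \;\le\; 2^{O(s \log s)},
\qquad
\#\{\text{Boolean functions on } n \text{ bits}\} \;=\; 2^{2^n}.
% Taking s = 2^n/(10n) gives s \log s \le 2^n/10 \ll 2^n, so all but a
% vanishing fraction of functions require circuits of size > 2^n/(10n).
```

The argument is non-constructive: it shows a hard function exists without exhibiting one, which is exactly the challenge.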
Happy Birthday! Besides all the listed achievements, Yao also made TCS more popular in China by founding the Yao class at Tsinghua University; that is why I got to know the field of TCS in the first place :)
Happy birthday to Andrew Chi-Chih Yao! Yao received the 2000
#ACMTuringAward
for his fundamental contributions to the theory of computation, including the complexity-based theory of pseudorandom number generation, cryptography and communication complexity.
The problem statement is extremely simple: given n d-dimensional points, find two points with maximum Euclidean distance.
The best algorithms (three decades old) run in n^{2-o(1)} time for d = omega(1).
Can we show, under SETH, that there is no n^{2-eps}-time algorithm for super-constant d?
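For concreteness, here is the naive quadratic baseline that all of these results are measured against (my own sketch, not from the thread):

```python
import math

def farthest_pair(points):
    """Naive O(n^2 * d) farthest pair: check every pair of d-dimensional points."""
    best_d2, best_i, best_j = -1.0, -1, -1
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            # Squared Euclidean distance between points i and j.
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            if d2 > best_d2:
                best_d2, best_i, best_j = d2, i, j
    return best_i, best_j, math.sqrt(best_d2)

# Example: 4 points in 3 dimensions.
pts = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (3, 3, 3)]
print(farthest_pair(pts))
```

The question above is whether this n^2 behavior is essentially unavoidable once d is super-constant.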
How fast can you derandomize a T(n)-time algorithm? In this work, Roei and I show that under some plausible conjectures, one can achieve an (n T(n))-time derandomization, and this is optimal assuming the Nondeterministic Strong Exponential-Time Hypothesis (NSETH).
With Lijie
@wjmzbmr1
Chen, Zhenjian Lu, Igor
@IgorCarboniOliv
Oliveira, and Rahul
@rahulsanthanam
Santhanam, we present a 𝐩𝐨𝐥𝐲𝐧𝐨𝐦𝐢𝐚𝐥-𝐭𝐢𝐦𝐞 pseudodeterministic algorithm for generating primes! (1/10)
It's great to end 2020 by putting up a new paper by Xin Lyu and me! :)
In this work, we proved a new "derandomized" XOR Lemma, which implies super strong correlation bounds against F_2-polynomials, as well as P^NP construction of extremely rigid matrices!
Wow, a very interesting paper about MCSP! A big open question about MCSP (Given a truth-table, find the smallest circuit computing it) is whether it's NP-complete. This work (among other things) shows that if we consider MCSP for multi-output functions, then it is NP-complete!
An exciting work by Rahul Ilango which is finally online! It shows the partial version of MCSP (given a truth-table of 0,1 and "don't care", find the minimum circuit consistent with this truth-table) is not in P under ETH, together with many other exciting results.
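To illustrate the flavor of the problem (this is my own toy sketch, not the algorithm from either paper): for tiny truth tables one can compute a close relative of MCSP, the minimum De Morgan *formula* leaf count, by dynamic programming over all 2^(2^n) truth tables.

```python
def min_formula_sizes(n):
    """Minimum De Morgan formula leaf count for every n-variable Boolean function.

    A toy relative of MCSP: leaves are literals x_i / not(x_i) (cost 1 each),
    internal gates are AND/OR (free; we count leaves). We relax over all
    truth tables, encoded as 2^n-bit integers, until a fixpoint.
    """
    N = 2 ** n                     # number of input assignments
    FULL = (1 << N) - 1            # all-ones truth table
    INF = float("inf")
    cost = [INF] * (1 << N)
    # Base case: each literal costs 1 leaf.
    for i in range(n):
        t = 0
        for a in range(N):
            if (a >> i) & 1:
                t |= 1 << a
        cost[t] = 1                # x_i
        cost[t ^ FULL] = 1         # not(x_i)
    # Relax with AND/OR until no cost improves.
    changed = True
    while changed:
        changed = False
        known = [t for t in range(1 << N) if cost[t] < INF]
        for t1 in known:
            for t2 in known:
                for t in (t1 & t2, t1 | t2):
                    c = cost[t1] + cost[t2]
                    if c < cost[t]:
                        cost[t] = c
                        changed = True
    return cost

cost = min_formula_sizes(2)
print(cost[0b0110])  # XOR of two variables
```

Even this brute force takes time doubly exponential in n, which hints at why the complexity of the exact problem is so subtle.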
Finally my lovely paper with Ofer Grossman on distributed computing is out!! It took a whole year to finish this project so I am super happy right now :)
@pinkfloydie
@rrwilliams
Congratulations to my Ph.D. advisor from a proud student!!! This was the paper that I read (almost) ten years ago that got me into complexity theory, and it has continued to inspire me ever since😃
A pretty interesting work by Zhenjian, Xin, Igor, and me! It was known that PRGs imply average-case hardness under *a certain distribution*, but it's unclear whether PRGs imply average-case hardness under the *uniform distribution*. In this paper, we show they do!
This is a very surprising work, showing that there is a function in AC^0 which requires nearly-cubic-size De Morgan formulas to compute.
I kind of thought this couldn't be proved using random restrictions, but they did it with random restrictions.
The upcoming FOCS 2022 will include a workshop titled New Directions in Derandomization. The workshop's format is a self-contained mini-course, focusing on the fast-paced developments from the last few years (the event will be in-person only).
In the summer there will be an online complexity seminar hosted via Zoom!
The first talk (6/18 noon EST) is by Rahul Ilango about recent results on depth-d Boolean formulas obtained by a form of lifting and applications to MCSP and lower bounds.
An interesting work I did with my amazing coauthors (Badih, Ravi, and Pasin) on the (non-interactive) local and shuffle model of DP!
TL;DR: CountDistinct is extremely hard for local and 1-message shuffle DP, but surprisingly easy for shuffle DP with 1/2+o(1) expected messages.
Rahul Ilango's year in best student papers (all solo):
- ITCS 2020
- CCC 2020
- FOCS 2020
All on the challenging MCSP. Did I mention he just finished his first year as a grad student?
@rrwilliams
must be a proud advisor!
Wow, super cool algorithm for subset sum from a high school student (!) and a friend of mine.
Indeed this paper makes use of some OI (olympiad in informatics) tricks, very impressive to me :-)
@rrwilliams
@firebat03
Thanks so much, Ryan!! This work would certainly be impossible without all your invaluable encouragement and insightful discussions!! (just as my other works) I just realized how much I have grown (as a complexity student) during the last two years with you :)
@rrwilliams
This would certainly not be possible at all without your fantastic guidance over the past years!! Now I've got some support from IBM to work with you for two more years🥰
Check the following STOC talk by Ce Jin
@jcvbcn
on hardness magnification! We showed that for some problems one can (non-trivially) prove an n^{2-eps} lower bound while improving that to n^{2+eps} would imply super-poly lower bounds.
A very cute paper by Ruosong and me ("Classical Algorithms from Quantum and Arthur-Merlin Communication Protocols") 😀 (to appear in ITCS 2019), about interesting connections between quantum communication protocols and classical fine-grained counting!
Another student makes a significant contribution to streaming!
Yichuan Wang, visiting undergrad from Tsinghua, proves a lower bound on approx counting implying optimality of both Misra-Gries (aka "Frequent") from 1982, + the recent quantiles algo.
1/
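For context, here is the classic Misra-Gries "Frequent" algorithm whose optimality that lower bound establishes, in a brief textbook sketch (my own summary, not Yichuan's proof):

```python
def misra_gries(stream, k):
    """Misra-Gries frequency sketch with at most k counters.

    For a stream of length m, every returned estimate undercounts:
        f[x] - m/(k+1) <= counters.get(x, 0) <= f[x].
    """
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k:
            counters[x] = 1
        else:
            # No free slot: decrement every counter, dropping zeros.
            for y in list(counters):
                counters[y] -= 1
                if counters[y] == 0:
                    del counters[y]
    return counters

# 'a' appears 4 times in a stream of 10 items.
print(misra_gries(list("aababcabcd"), 2))
```

The sketch uses O(k) words of space regardless of the stream length, and the lower bound shows this space/error trade-off is essentially the best possible.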
Amazing and beautiful work (both conceptually and technically). They proved (among many things) that the "quantum Merkle tree" by Ramis and me indeed works, and under a condition even weaker than OWFs! (We conjectured the security in a strong model) See Fermi's thread for details.
New paper out today with Sam Gunn, Nathan Ju (
@nathanju34
), and Mark Zhandry.
This project began with a simple question: what does it mean to commit to a quantum state? (1/16)
Very impressive! Resolved the multi-message shuffle complexity of selection/parity-learning without any restriction on the number of messages! (previously even the two-message case was open)
Diagonalization requires three quantifiers and results in an exponentially hard language in Sigma_3 EXP. In our recent work, we finally made progress by showing that Sigma_2 EXP (and indeed, S_2 EXP) already requires maximum (2^n/n) circuit complexity!
Also check out STOC talk by Hanlin Ren
@dfc7027894e168c
on new average-case circuit lower bounds! We showed non-deterministic quasi-polynomial time is strongly average-case hard for ACC^0, while previously no such result was known even for AC^0[2].
In the language of complexity theory, the goal is to identify the smallest complexity class containing a language with exponential circuit complexity. For the past four decades, the only known method to construct such a hard function has been diagonalization.
@dfc7027894e168c
There is also a student from Tsinghua (who shared the best student paper award) who couldn't go to STOC 2022 because his visa application was rejected multiple times... This is super disappointing😡
The 2018 paper is actually the first paper during my PhD. Since then I have thought about this cute problem a lot but couldn't figure out how to further bring down the dimension to any d = omega(1). I suspect some deep math results from AG codes or number theory could be useful..
A very nice paper! The reduction in this paper is super cool and kind of unexpected to me. (It builds the reduction by inspecting the minimum circuit size modulo some prime p, wow)
@dfc7027894e168c
This looks incredible!
It seems to involve case analysis for >=100 cases (!!), I wonder if such an argument can be formulated in Coq to check automatically...😄
In particular, it claims that (Lemma 1.31) TC^0 can be simulated by depth-2 quasi-poly-size Symmetric-of-AND circuits. This would be a major breakthrough in circuit complexity if true. I tried to read the proof but it is a bit hard to verify... I wonder how other people think...
I was working on this model during my summer internship at Google, and was caught by this amazing problem but couldn't figure out how to deal with an unbounded number of messages. Very happy to see this being resolved!
Topics include non-black-box derand. techniques, which replace classical PRGs; superfast algorithms and free lunch theorems, for algorithms and for proof systems; new connections between derand. and other areas in TCS; and the range avoidance problem for explicit constructions.
Looking forward to reading this new paper by Mingda Qiao and Greg Valiant: This blows my mind. No distributional assumption made, yet non-trivial prediction possible.
(I *would* link that to my
#fintech
friends, but don't have any).
Under SETH, it's straightforward to show there is no n^{2-eps} algo when d = O(log n). In 2017, Ryan Williams improved the dimension to d = O(polyloglog n), and I further improved the dimension to d = 2^{O(log* n)} in 2018. But the progress has then stopped :(
@cheraghchi
@algo_class
I thought one reason may be that Karp reduction is important for the NP vs coNP distinction (SAT and UNSAT are equivalent under Cook reduction). I am not sure though... Karp reductions are also required for classes like NP cap coNP.
@cstheory
a super interesting problem I had worked on (like 2 years ago); this application of polynomial-method is something I have never seen before, a truly amazing work!
First paper as a graduate student! Thanks so much to my omnipotent advisor!!! :-) One cute result is that finding the l_2-farthest pair in 2^{log* n} dimensions needs essentially n^2 time
There are also two other great talks on the same topic in the workshop (), by Nicollas Sdroievski and Yanyi Liu. And of course many other fantastic talks by many great speakers! See for all the videos.
@ilyaraz2
looks like many ppl think it is correct (), as this is from two leading experts in that direction. But I guess there is no serious verification yet as it only appeared a week ago?
The following two questions regarding derandomization have attracted some recent interest: 1. what is the fastest derandomization? 2. what is the minimum assumption for derandomization? The aforementioned talks explored these two questions for both algorithms and proof systems.
@BooleanAnalysis
@rrwilliams
Chen-Tell also showed that under similar assumptions, one can derand BPT[T] in TIME[T n^eps] "on average" instead of worst-case. In that sense, "in practice" one may still derand BPT[T] with almost no overhead...
@BooleanAnalysis
@rrwilliams
Based on Wil16, one can also show that under "
#SAT
cannot be solved by co-NP-type 2^{0.99n} circuits", this n-factor blow-up of BPT[T] to TIME[T n] is also optimal, for every T.😄
Perhaps this means one can try to refute NSETH by finding faster derandomization!
@_shmcg
@BooleanAnalysis
Sipser's paper is indeed very ahead of its time! In fact, this result can be strengthened to give a derandomization with quadratic overhead in running time, by plugging in a modern extractor that is much stronger than what was known to Sipser in the 80s.
Complementary to the recent result by Cheu-Ullman,
(they proved tight bounds for selection in the robust shuffle model with unbounded messages) We also obtain some better sample lower bounds when the number of messages is not so big and without robustness.
The proof of the above result builds on the n/log n lower bound for CountDistinct in the property testing model (estimating the unseen), generalizing their method to bound the TV distance between mixtures of multi-dimensional Poisson distributions using moment-matching.
In my recent survey of quantum computing applications, I asked whether the Kerenidis-Prakash q. algorithm for recommendation systems is exponentially faster than the best classical algorithm. I didn't expect the answer (no) to come from an 18 year old.
@BooleanAnalysis
@rrwilliams
In CT21, the assumption is still against normal computation:
(1)TIME[2^{kn}] is not in TIME[2^{0.99kn}]/2^{0.99n}) plus (2)OWFs.
It's interesting to note that (1) is indeed implied by any PRG-based derand of BPT[T] in TIME[T n]. So (1) is kind of <=> to fast derand (modulo OWFs).
@pravesh
@jfitzsimons
@rrwilliams
Sorry that I forgot, my bad. Ryan's paper is here, and it also contains lots of discussion of previous work (like the best algorithms). My subsequent work is here