Denis Yarats

@denisyarats

6,804
Followers
634
Following
94
Media
873
Statuses

Cofounder & CTO @perplexity_ai

United States
Joined July 2015
Pinned Tweet
@denisyarats
Denis Yarats
3 months
excited to announce our new funding and the launch of Enterprise Pro
@perplexity_ai
Perplexity
3 months
We’re excited to announce that we’ve raised $62.7 million in Series B1 funding led by Daniel Gross. The round also includes Stanley Druckenmiller, NVIDIA, Jeff Bezos, Tobi Lutke, Garry Tan, Andrej Karpathy, Dylan Field, Elad Gil, Nat Friedman, IVP, NEA, Jakob Uszkoreit, Naval
100
247
2K
2
5
64
@denisyarats
Denis Yarats
3 years
Thrilled to announce the Unsupervised Reinforcement Learning (URL) workshop at ICML 2021, where we bring together researchers to discuss challenges and possible solutions for applying un(self-)supervised learning techniques to enhance RL agents. Website: 1/
Tweet media one
2
81
459
@denisyarats
Denis Yarats
4 years
Excited to announce our new work together with @ikostrikov and @rob_fergus : Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels. Paper: Code: Website: [1/N]
6
97
383
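For context, the core mechanism of DrQ is regularizing the Q-function by averaging over random image augmentations of the pixel observations, with random shifts (pad-and-crop) as the workhorse. Below is a minimal PyTorch sketch of that augmentation only; the batch shapes and the pad size of 4 are illustrative assumptions, not values quoted in this thread.

```python
import torch
import torch.nn.functional as F

def random_shift(imgs: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """Randomly shift a batch of images by up to `pad` pixels.

    imgs: (B, C, H, W) float tensor of pixel observations.
    Pads with edge replication, then takes a random crop per sample.
    """
    b, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(imgs)
    for i in range(b):
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out

# Usage: augment observations before computing both the Q-target and the Q-loss,
# so the critic sees several shifted views of the same underlying state.
obs = torch.rand(32, 9, 84, 84)  # e.g. 3 stacked RGB frames (assumed shape)
aug_obs = random_shift(obs)
```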
@denisyarats
Denis Yarats
3 years
Excited to release DrQ-v2! DrQ-v2 is more sample efficient, runs 3.5X faster than DrQ, and is the first model-free agent that solves humanoid from pixels. co-authors: @rob_fergus , @alelazaric , @LerrelPinto tech report: code: 1/N
9
45
248
@denisyarats
Denis Yarats
3 years
It is currently challenging to measure progress in Unsupervised RL without common tasks & an evaluation protocol. To take a step toward addressing this issue, we release our #NeurIPS2021 paper: the Unsupervised RL Benchmark (URLB)! Paper: Code: 1/N
Tweet media one
2
55
230
@denisyarats
Denis Yarats
6 months
Extremely excited to announce our Series B! We really appreciate the trust of our investors and users in giving us an opportunity to advance our mission of building the best answer engine!
@perplexity_ai
Perplexity
6 months
We are happy to announce that we've raised $73.6 million in Series B funding led by IVP with participation from NVIDIA, NEA, Bessemer, Elad Gil, Jeff Bezos, Nat Friedman, Databricks, Tobi Lutke, Guillermo Rauch, Naval Ravikant, Balaji Srinivasan.
145
314
3K
16
2
163
@denisyarats
Denis Yarats
2 years
Currently, Offline RL data is collected under the same reward that is used for evaluation, which is not ideal... @brandfonbrener and I propose an alternative approach, ExORL, which uses Unsupervised RL & reward relabeling to construct datasets for Offline RL. paper: 1/10
Tweet media one
4
31
155
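The relabeling step described above is conceptually simple: trajectories collected by a reward-free exploration agent are stamped with the reward of whatever downstream task you want to study offline. A minimal sketch under assumed data layouts (the array structure and `reward_fn` signature are illustrative, not ExORL's actual code):

```python
import numpy as np

def relabel_dataset(observations: np.ndarray,
                    actions: np.ndarray,
                    next_observations: np.ndarray,
                    reward_fn) -> dict:
    """Stamp reward-free exploration data with a downstream task reward.

    observations, next_observations: (N, obs_dim) arrays
    actions: (N, act_dim) array
    reward_fn: maps (obs, action, next_obs) -> scalar reward for the target task
    """
    rewards = np.array([
        reward_fn(o, a, no)
        for o, a, no in zip(observations, actions, next_observations)
    ], dtype=np.float32)
    return {
        "observations": observations,
        "actions": actions,
        "next_observations": next_observations,
        "rewards": rewards,  # now usable by any offline RL algorithm
    }
```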
@denisyarats
Denis Yarats
3 years
Happy to share my new work -- Proto-RL, a task-agnostic pre-training scheme that reconciles exploration and representation learning in image-based RL! with: @rob_fergus , Alessandro Lazaric, and @LerrelPinto . paper: code: [1/N]
Tweet media one
3
36
149
@denisyarats
Denis Yarats
5 years
Excited to share our new work led by @rjerryma , where we attempt to demystify RAdam (Liu et al. 2019) and its automatic learning rate warmup schedule. Paper: [1/4]
Tweet media one
4
52
146
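As a hedged illustration of the general idea behind this line of work (replacing RAdam's implicit schedule with a plain linear learning-rate warmup for Adam), here is a PyTorch sketch. The 2/(1 - β₂) warmup horizon is my reading of that work and should be treated as an assumption, not a quote from the paper; the model and objective are placeholders.

```python
import torch

model = torch.nn.Linear(128, 10)      # placeholder model
beta2 = 0.999
warmup_steps = int(2 / (1 - beta2))   # ~2000 steps for beta2 = 0.999 (assumed rule of thumb)

optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.9, beta2))
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lr_lambda=lambda step: min(1.0, (step + 1) / warmup_steps),  # linear warmup, then constant
)

for step in range(5000):
    optimizer.zero_grad()
    loss = model(torch.randn(32, 128)).pow(2).mean()  # dummy objective
    loss.backward()
    optimizer.step()
    scheduler.step()
```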
@denisyarats
Denis Yarats
7 months
it is incredibly humbling to be mentioned alongside such great companies! a testament to the hard work and dedication of the @perplexity_ai team
Tweet media one
5
17
137
@denisyarats
Denis Yarats
1 year
At @perplexity_ai , we've been working diligently on our fast in-house LLM inference infrastructure and gearing up to train and serve our own LLMs. Now, with @MetaAI paving the way by open sourcing LLaMA-2, many exciting opportunities have suddenly become available.
4
19
129
@denisyarats
Denis Yarats
4 years
We are releasing a well-tuned and miniature @PyTorch implementation of Soft Actor-Critic () together with @ikostrikov : . We test it on many continuous control tasks from the @DeepMind Control Suite and report the following results:
Tweet media one
1
28
130
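For readers curious what a miniature Soft Actor-Critic implementation boils down to, here is a rough sketch of the core critic and actor losses on dummy data. The network sizes, the fixed temperature, and the omission of target networks and done masks are simplifications of mine, not the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, act_dim, batch = 17, 6, 256
gamma, alpha = 0.99, 0.1  # discount and (fixed, assumed) entropy temperature

actor = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, 2 * act_dim))
q1 = nn.Sequential(nn.Linear(obs_dim + act_dim, 256), nn.ReLU(), nn.Linear(256, 1))
q2 = nn.Sequential(nn.Linear(obs_dim + act_dim, 256), nn.ReLU(), nn.Linear(256, 1))

def sample_action(obs):
    """Tanh-squashed Gaussian policy with the change-of-variables log-prob."""
    mu, log_std = actor(obs).chunk(2, dim=-1)
    std = log_std.clamp(-5, 2).exp()
    dist = torch.distributions.Normal(mu, std)
    pre_tanh = dist.rsample()
    action = torch.tanh(pre_tanh)
    log_prob = dist.log_prob(pre_tanh) - torch.log(1 - action.pow(2) + 1e-6)
    return action, log_prob.sum(-1, keepdim=True)

obs = torch.randn(batch, obs_dim)
next_obs = torch.randn(batch, obs_dim)
act = torch.rand(batch, act_dim) * 2 - 1
rew = torch.randn(batch, 1)

# Critic loss: soft Bellman backup with a clipped double-Q target.
with torch.no_grad():
    next_act, next_logp = sample_action(next_obs)
    target_q = torch.min(q1(torch.cat([next_obs, next_act], -1)),
                         q2(torch.cat([next_obs, next_act], -1)))
    target = rew + gamma * (target_q - alpha * next_logp)
critic_loss = F.mse_loss(q1(torch.cat([obs, act], -1)), target) + \
              F.mse_loss(q2(torch.cat([obs, act], -1)), target)

# Actor loss: maximize the entropy-regularized Q-value.
new_act, logp = sample_action(obs)
actor_loss = (alpha * logp - q1(torch.cat([obs, new_act], -1))).mean()
```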
@denisyarats
Denis Yarats
7 months
Excited to release our online LLMs! These models have internet access and perform very well on prompts that require factuality and up-to-date information. read more here:
Tweet media one
@perplexity_ai
Perplexity
7 months
We’re thrilled to announce two online LLMs we’ve trained: pplx-7b-online and pplx-70b-online! Built on top of open-source LLMs and fine-tuned to use knowledge from the internet. They are now available via Labs and in a first-of-its-kind live-LLM API.
59
242
2K
4
6
118
@denisyarats
Denis Yarats
4 months
perplexity runs on kubernetes 🙏
@kelseyhightower
Kelsey Hightower
4 months
I finally got a chance to play with @perplexity_ai and I can see how this is going to replace search engines for some people. Instead of a bunch of links, you get answers, and the sources they were derived from. Best part, no ads, but I'm sure that won't last forever.
Tweet media one
50
58
680
3
9
108
@denisyarats
Denis Yarats
6 months
Johnny Ho ( @randomjohnnyh ) was actually the main reason I joined Quora in 2013, instead of Facebook or Google. I didn't know him personally, but I had heard about him from Gena Korotkevich ( @que_tourist ), who lost to Johnny in his last IOI after three consecutive years of
Tweet media one
Tweet media two
@suleimenov
Arman Suleimenov
6 months
1/ @perplexity_ai , world’s first conversational answer engine, just raised $73M in Series B funding @ $520M valuation. Perplexity’s co-founder, @randomjohnnyh , is IOI 2012 Gold (perfect score: 600/600). In the middle (3rd from the left) of the first picture.
Tweet media one
6
12
292
1
3
88
@denisyarats
Denis Yarats
5 years
Excited to announce our new paper on Improving Sample Efficiency in Model-Free Reinforcement Learning from Images with @yayitsamyzhang , @ikostrikov , @brandondamos , Joelle Pineau, and Rob Fergus Paper: Code: [1/6]
Tweet media one
2
14
73
@denisyarats
Denis Yarats
18 days
nemotron-4-340B-Instruct is now on -- give it a shot!
Tweet media one
@ctnzr
Bryan Catanzaro
24 days
Nemotron-4-340B-Base: * Trained for 9T tokens on 6144 H100 GPUs * Using Megatron-Core at 41% MFU * 96 layers, 18432 hidden state * GQA, Squared ReLU
Tweet media one
5
7
49
3
12
73
@denisyarats
Denis Yarats
7 months
It's only been 1 year, can't believe the progress we've made, so happy to be a part of this journey 🚀 Congrats @perplexity_ai team!
@AravSrinivas
Aravind Srinivas
7 months
Today marks the first year anniversary of the launch of , launched on Dec 7, 2022. A lot of people ask me why we ended up being the best product in this category. If I had to pick one word: it is conviction. We believed the world needed answers rather than links.
74
54
799
3
7
71
@denisyarats
Denis Yarats
3 years
Excited to see that Proto-RL got accepted to #ICML2021 !
@denisyarats
Denis Yarats
3 years
Happy to share my new work -- Proto-RL, a task-agnostic pre-training scheme that reconciles exploration and representation learning in image-based RL! with: @rob_fergus , Alessandro Lazaric, and @LerrelPinto . paper: code: [1/N]
Tweet media one
3
36
149
2
4
64
@denisyarats
Denis Yarats
8 months
Google now indexes @perplexity_ai 🚀 we are so back
Tweet media one
5
3
63
@denisyarats
Denis Yarats
3 months
@ylecun @AravSrinivas I think it is a bit exaggerated 🤣
1
2
63
@denisyarats
Denis Yarats
3 years
As a cherry on the cake, we will also have @ylecun as an invited speaker at our workshop!
@denisyarats
Denis Yarats
3 years
Thrilled to announce the Unsupervised Reinforcement Learning (URL) workshop at ICML 2021, where we bring together researchers to discuss challenges and possible solutions applying un(self-)supervised learning techniques to enhance RL agents. Website: 1/
Tweet media one
2
81
459
0
6
61
@denisyarats
Denis Yarats
1 year
Excited to announce our Series A funding and the launch of our mobile app!
@perplexity_ai
Perplexity
1 year
Announcing Perplexity AI’s iPhone app and series A funding! Perplexity provides instant answers and cited sources on any topic, now available on iPhone. With follow-up questions, voice search, and thread history, learn and explore faster than ever before.📱
66
214
950
5
4
58
@denisyarats
Denis Yarats
18 days
we've been testing sonnet 3.5 for the last several weeks and have been consistently impressed with the quality of the model. with this new model, we were able to meaningfully improve results on our internal benchmarks. excited for the perplexity users to give this model a spin,
@perplexity_ai
Perplexity
18 days
This model outperforms Claude 3 Opus and GPT-4o on our internal benchmarks.
Tweet media one
4
26
192
3
5
54
@denisyarats
Denis Yarats
9 months
API for all the models is coming out soon
@perplexity_ai
Perplexity
9 months
💥 Mistral 7B Instruct is available now. Try it free—
18
49
343
3
2
50
@denisyarats
Denis Yarats
9 months
🚀
Tweet media one
2
0
49
@denisyarats
Denis Yarats
7 months
Met someone at NeurIPS yesterday who is a power user of @perplexity_ai , aka tiktok for knowledge 🤯
Tweet media one
3
0
48
@denisyarats
Denis Yarats
11 months
It's hard to believe that it's been a year already since it all started! I couldn't ask for better co-founders and team 🚀 @AravSrinivas , @randomjohnnyh & @andykonwinski
Tweet media one
@perplexity_ai
Perplexity
11 months
🎂 Celebrating Perplexity's first birthday with a 25% discount on Pro for new subscribers! Use code ONEYEAR25 at checkout. 🔗 Unlock the most powerful AI research assistant with Perplexity Pro. Enjoy enhanced Copilot, GPT-4 access, unlimited uploads, and
11
14
80
4
2
47
@denisyarats
Denis Yarats
2 months
Extremely happy to work with @MParakhin again after all these years and to learn from his wisdom! Fun story: Back in 2011, during his first stint at Bing, Mikhail went on a recruiting trip to Moscow and interviewed many people from Yandex. One of those people was me, and that is
@AravSrinivas
Aravind Srinivas
2 months
Excited to share that we have three new advisors joining Perplexity to help us across search, mobile, and distribution: @emilmichael - former CBO of Uber, @richminer - cofounder of Android, advisor to Google, and @MParakhin - the former CEO of Bing, who launched Bing Chat and
Tweet media one
31
40
600
0
0
46
@denisyarats
Denis Yarats
1 year
@AravSrinivas @minimaxir the fundamental issue is that these libraries (langchain, llama index, etc) are trying to do too many things and have to succumb to a very general interface that is just too rigid for any practical use case, especially in production. they are, however, very useful for educational
2
5
44
@denisyarats
Denis Yarats
9 months
it is interesting how we have double standards for LLMs and general search (e.g. Google). I'm sure you can find all kinds of nasty stuff if you google hard enough and people are OK with this, but if an LLM returns something controversial to a *controversial* prompt it is a
@ylecun
Yann LeCun
9 months
An interesting example of how overzealous safety tuning can reduce the usefulness of AI tools.
44
121
977
4
2
44
@denisyarats
Denis Yarats
5 months
both Gemma 7b and 2b are up on , give it a shot
Tweet media one
@JeffDean
Jeff Dean (@🏡)
5 months
Introducing Gemma - a family of lightweight, state-of-the-art open models for their class, built from the same research & technology used to create the Gemini models. Blog post: Tech report: This thread explores some of the
Tweet media one
106
827
4K
4
3
44
@denisyarats
Denis Yarats
3 years
@MishaLaskin wrote a nice blog post about our recent work on establishing a benchmark for Unsupervised RL, please take a look:
Tweet media one
0
9
42
@denisyarats
Denis Yarats
3 months
dbrx-instruct by @DbrxMosaicAI is available on , enjoy!
Tweet media one
@jefrankle
Jonathan Frankle
3 months
Meet DBRX, a new sota open llm from @databricks . It's a 132B MoE with 36B active params trained from scratch on 12T tokens. It sets a new bar on all the standard benchmarks, and - as an MoE - inference is blazingly fast. Simply put, it's the model your data has been waiting for.
Tweet media one
33
267
1K
0
3
43
@denisyarats
Denis Yarats
11 months
Added CodeLLaMa-34B-instruct to , give it a spin! Thanks for the great work @MetaAI , @ylecun , @jnsgehring , @syhw et al.
@perplexity_ai
Perplexity
11 months
Code LLaMA is now on Perplexity’s LLaMa Chat! Try asking it to write a function for you, or explain a code snippet: 🔗 This is the fastest way to try @MetaAI ’s latest code-specialized LLM. With our model deployment expertise, we are able to provide you
28
168
695
1
6
40
@denisyarats
Denis Yarats
4 years
People have been asking us whether our (together with @ikostrikov and @rob_fergus ) data augmentation techniques (DrQ ) can also improve sample efficiency on Atari. The answer is YES! See the tweet chain for further information. [1/N]
Tweet media one
1
5
39
@denisyarats
Denis Yarats
5 months
top10 🚀
Tweet media one
3
1
38
@denisyarats
Denis Yarats
4 years
We extend our data augmentation techniques from DrQ () to the on-policy setting, which results in much improved generalization on ProcGen! Now DrQ is proven to be effective for a variety of RL algorithms, both off- and on-policy!
@robertarail
Roberta Raileanu
4 years
Excited to share our new paper “Automatic Data Augmentation for Generalization in Deep Reinforcement Learning” w/ @maxagoldstein8 , @denisyarats , @ikostrikov , and @rob_fergus ! Paper: Code: Website:
2
51
224
1
8
35
@denisyarats
Denis Yarats
6 months
@CNBCTechCheck
TechCheck
6 months
AI-powered search startup @perplexity_ai is looking to take on $GOOGL, raising a $70M+ Series B at a $520M valuation. But Google has left no shortage of challengers in its wake. @dee_bosa spoke to Perplexity CEO @AravSrinivas on why this time is different.
4
11
54
0
0
35
@denisyarats
Denis Yarats
21 days
🇯🇵
@AravSrinivas
Aravind Srinivas
21 days
Excited to be partnering with @SoftBank to grow Perplexity in Japan!
Tweet media one
102
120
3K
1
2
35
@denisyarats
Denis Yarats
5 months
codellama-70b-instruct is now available on
Tweet media one
@ylecun
Yann LeCun
5 months
The new breed of CodeLlamas comes with source, like their predecessor.
9
44
375
2
3
35
@denisyarats
Denis Yarats
7 months
What a year it has been! We want to extend our thanks to everyone who has supported us. As a token of our appreciation, we are offering two months of Perplexity Pro for free. Happy Holidays! We can't wait for 2024!
@perplexity_ai
Perplexity
7 months
We've made incredible progress, thanks to you all who have trusted us and provided feedback and support! To thank you, we're offering: Two Months of Perplexity Pro, for free: or use code HOLIDAYS23 in the next 10 days. Happy Holidays!
Tweet media one
57
180
1K
2
1
34
@denisyarats
Denis Yarats
3 years
@ylecun @ylecun @AravSrinivas recent Proto-RL () that uses ideas from both CURL and DrQ gets us closer to the LeCake, as its contrastively learned representations are **fully detached** from RL; besides, they can be learned with just the MaxEnt objective.
1
9
33
@denisyarats
Denis Yarats
3 months
Now available on
@ylecun
Yann LeCun
3 months
🥁 Llama3 is out 🥁 8B and 70B models available today. 8k context length. Trained with 15 trillion tokens on a custom-built 24k GPU cluster. Great performance on various benchmarks, with Llama3-8B doing better than Llama2-70B in some cases. More versions are coming over the next
Tweet media one
222
1K
7K
3
2
33
@denisyarats
Denis Yarats
7 years
All translation on Facebook is now done via neural networks, including our ConvS2S! More details:
0
11
30
@denisyarats
Denis Yarats
6 months
what a surprise to see @perplexity_ai on the @Nasdaq billboard!
Tweet media one
1
1
30
@denisyarats
Denis Yarats
2 months
high praise
@SquawkCNBC
Squawk Box
2 months
Investor Stanley Druckenmiller talks about his investment in AI platform Perplexity and says, "It's unbelievable. It's an answer machine. Nothing like I've ever seen."
20
93
585
2
1
30
@denisyarats
Denis Yarats
5 months
incredible to see @perplexity_ai being featured on the @nytimes front page.
Tweet media one
2
3
30
@denisyarats
Denis Yarats
7 months
Try out our 70-billion-parameter model, which has been fine-tuned to work effectively with our search index and is fully served in-house. Our internal evaluations indicate that this model outperforms GPT-3.5-Turbo in several important aspects, including factuality, conciseness,
@perplexity_ai
Perplexity
7 months
Perplexity Pro users can now pick the recently announced in-house model of Perplexity (pplx-70b-online) as the model of their choice! According to human evals, this model is more factually accurate, helpful, concise, and less moralizing than GPT-3.5-turbo for web searches!
17
40
302
4
1
28
@denisyarats
Denis Yarats
8 months
streaming a 7b model at 220 tokens/sec 🔥 try it out at
Tweet media one
3
0
29
@denisyarats
Denis Yarats
1 year
Excited to roll this out! Not the optimal configuration just yet, but it should be a good way to start. We will greatly improve latency and throughput over the next week or so. Stay tuned!
@perplexity_ai
Perplexity
1 year
🚨 LLaMa-2-70B-Chat is available now! Try it here: 💬 What will you ask first?
42
190
814
1
1
27
@denisyarats
Denis Yarats
9 months
Excited to release our LLM API! It's been an exciting journey to build the infra from the ground up and scale it to serve billions of tokens daily.
@perplexity_ai
Perplexity
9 months
Introducing pplx-api, our LLM API which serves Mistral and Llama2 models with blazing speed and throughput. pplx-api is in public beta for our Pro subscribers! We partnered with @nvidia and @awscloud to build our proprietary inference. Learn more:
Tweet media one
24
74
475
1
1
26
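For reference, a minimal sketch of calling an OpenAI-compatible chat-completions endpoint of the kind this announcement describes. The base URL, model name, and environment variable below are assumptions for illustration rather than documented values; check the pplx-api docs for the real ones.

```python
import os
import requests

# Hypothetical values -- consult the actual pplx-api documentation.
API_URL = "https://api.perplexity.ai/chat/completions"
MODEL = "llama-2-70b-chat"

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['PPLX_API_KEY']}"},
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Explain beam search in two sentences."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```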
@denisyarats
Denis Yarats
1 year
Our goal at @perplexity_ai is to provide information that you can trust. It is a difficult problem, but we are working hard to ensure that our UX, search, and LLM all work together to provide accurate and reliable information. Today, we're taking another step in that direction:
@perplexity_ai
Perplexity
1 year
Introducing a new editing tool for : you can now edit answers by either adding sources for more perspectives or deleting irrelevant sources. With just a few clicks, you can add context, remove incorrect information, and curate trustworthy answers:
21
40
298
0
2
26
@denisyarats
Denis Yarats
9 months
Very cool to see that Google itself chose to use @perplexity_ai as a worthy baseline 🚀 More importantly, this paper shows how much alpha there is in combining vanilla search with an LLM
@cto_junior
TDM (e/λ)
9 months
Interesting, a paper by google that evaluated @perplexity_ai on their new eval. It performs really well for factual Q&A and debunking, better than vanilla google search.
Tweet media one
7
25
268
3
1
25
@denisyarats
Denis Yarats
6 months
@sadlyoddisfying @perplexity_ai we don't control this; it seems like an iOS bug. Could you please restart your iPhone?
1
1
24
@denisyarats
Denis Yarats
1 month
now @WSJ runs LLM evals too 🤯
Tweet media one
3
5
24
@denisyarats
Denis Yarats
8 months
yikes. on the other hand, the @OpenAI API has been flawless since all this started...
Tweet media one
2
1
22
@denisyarats
Denis Yarats
7 months
@AravSrinivas @ylecun if in doubt remember this
Tweet media one
2
0
23
@denisyarats
Denis Yarats
3 years
Today at 12PM PDT together with @ikostrikov we will be giving a spotlight presentation on DrQ () at ICLR 2021 (Oral Session 8). Consider attending if you want to learn more about our work.
Tweet media one
0
4
23
@denisyarats
Denis Yarats
1 year
I certainly didn't expect to see our users complaining about our LLaMA-2 inference () being too fast for them. I suppose we have to make it slower. 🤔
Tweet media one
4
0
22
@denisyarats
Denis Yarats
1 year
70B is on the way!
@perplexity_ai
Perplexity
1 year
It's live! LLaMa-13B is now available on
16
36
215
3
0
22
@denisyarats
Denis Yarats
1 year
1
0
22
@denisyarats
Denis Yarats
3 months
@ylecun @AravSrinivas iirc the demo that did the trick was to ask Perplexity's BirdSQL who are the people who follow Yann but don't follow Gary Marcus on Twitter 🤣
2
1
22
@denisyarats
Denis Yarats
1 year
I'm super excited about @Perplexity Copilot. It offers a completely new way of web searching via an interactive interface filled with rich UI components, not just text. Copilot browses and analyzes tens of web pages on your behalf, ultimately producing a thoroughly researched
@perplexity_ai
Perplexity
1 year
The next iteration of Perplexity has arrived: Copilot, your interactive AI search companion. 🚀🤖 Perplexity Copilot guides your search experience with interactive inputs, leading you to a rich, personalized answer, powered by GPT-4. Try it for free at
94
345
2K
0
3
21
@denisyarats
Denis Yarats
7 months
mistral-medium, @MistralAI 's strongest LLM, is now available on
Tweet media one
1
0
19
@denisyarats
Denis Yarats
11 months
@AviSchiffmann use instead
1
0
21
@denisyarats
Denis Yarats
4 months
Perplexity "Discover Daily" is top 10 in News on Apple Podcast 🚀
Tweet media one
2
0
21
@denisyarats
Denis Yarats
6 months
👀
Tweet media one
0
0
20
@denisyarats
Denis Yarats
1 year
Exciting news! We've just launched a series of updates, including user sign-up. This will allow us to enhance personalization and deliver advanced features to our users! 🚀
@perplexity_ai
Perplexity
1 year
We are excited to launch the next version of ! Introducing login, threads, focus search, improved formatting, and more. 🎉 Login now to start collecting your own library of threads. Keep them to yourself, or share your latest discovery. You decide.
30
121
636
1
1
19
@denisyarats
Denis Yarats
3 months
You can try it out on (it is a prompted base model; the post-trained model is in progress)
Tweet media one
@MistralAI
Mistral AI
3 months
RELEASE 0535902c85ddbb04d4bebbf4371c6341 lol
29
32
897
0
3
20
@denisyarats
Denis Yarats
3 months
one-click perplexity
@ATXsantucci
Joe Santucci
3 months
I use @perplexity_ai so much it has a dedicated button on my mouse. #SmartActions on Logi Options+ means all I have to do is highlight text, press my button and let Perplexity do the rest.
Tweet media one
8
8
122
3
0
21
@denisyarats
Denis Yarats
3 years
If you are interested in learning more about unsupervised pre-training in image-based RL, consider attending our #ICML2021 poster session at 9am PST / 12pm EST.
@denisyarats
Denis Yarats
3 years
Happy to share my new work -- Proto-RL, a task-agnostic pre-training scheme that reconciles exploration and representation learning in image-based RL! with: @rob_fergus , Alessandro Lazaric, and @LerrelPinto . paper: code: [1/N]
Tweet media one
3
36
149
0
0
20
@denisyarats
Denis Yarats
5 months
someone created a Wikipedia page for @perplexity_ai
Tweet media one
1
1
20
@denisyarats
Denis Yarats
7 months
3
1
19
@denisyarats
Denis Yarats
5 months
@teja2495 @perplexity_ai haven't shipped all the recent improvements yet, additional 100ms latency reduction is coming. also iOS is about to be an instant answer machine.
1
0
19
@denisyarats
Denis Yarats
1 year
@MichaelRoyzen not true, also not a great strategy to speak negatively about competitors. p.s. running a fine-tuned flan-t5 in house with Triton is not that hard, so I wouldn't really brag about it :D
3
0
18
@denisyarats
Denis Yarats
6 months
@ylecun @AravSrinivas thanks @ylecun for your support from the very beginning! I still remember how @AravSrinivas and I were showing you our first Twitter search demo back at NYU 🤣
1
0
17
@denisyarats
Denis Yarats
4 years
In the end, it is more important that your work is actually being used by others than having it accepted at an ML conference.
@mangahomanga
Homanga Bharadhwaj
4 years
@jachiam0 Another paper that comes to mind is by @denisyarats @yayitsamyzhang Kostrikov @brandondamos Pineau @rob_fergus Although it was rejected from ICLR last year (novelty!), it inspired a host of pixel-based RL algorithms like CURL, RAD, DrQ...this year
1
4
18
0
2
18
@denisyarats
Denis Yarats
4 months
the gmail creator is speaking
@paultoo
Paul Buchheit
4 months
When I joined Google in 1999, I didn't understand how it was possible to win vs much larger and better funded competitors such as Altavista, but it seemed like a good opportunity to learn. What I learned is that if you're in the lead, and you're moving faster than everyone else,
74
170
2K
0
0
18
@denisyarats
Denis Yarats
3 years
Please join us at our ICML workshop on Unsupervised RL today at 8:45 EST. We have an amazing list of speakers and oral presentations, as well as a poster session! ICML: Website:
@denisyarats
Denis Yarats
3 years
Thrilled to announce the Unsupervised Reinforcement Learning (URL) workshop at ICML 2021, where we bring together researchers to discuss challenges and possible solutions applying un(self-)supervised learning techniques to enhance RL agents. Website: 1/
Tweet media one
2
81
459
0
2
17
@denisyarats
Denis Yarats
1 year
I'm especially excited to use this to read articles on ! Try it out!
@perplexity_ai
Perplexity
1 year
👋 Say hello to a more personalized browsing experience with our updated Chrome extension! can now provide answers focused on the page or website you're currently looking at. No more sifting through irrelevant search results:
31
125
703
0
2
17
@denisyarats
Denis Yarats
6 months
thank you for the high praise!
@tobi
tobi lutke
6 months
and its app have definitely replaced my google usage for now. It’s pretty incredible.
136
138
2K
3
0
15
@denisyarats
Denis Yarats
1 year
@soumithchintala rlhf doesn't need a lot of data; discrimination is much more sample efficient than generation...
1
0
16
@denisyarats
Denis Yarats
2 years
Another update from our team on how to use Bird SQL to analyze Twitter data.
@perplexity_ai
Perplexity
2 years
With Perplexity, you can search over databases. But also do more! Visualize & Summarize results as aggregate stats, plots Everyone gets to analyze data from natural language. No prerequisite knowledge of SQL or plotting libraries necessary. Link:
16
71
452
1
1
16
@denisyarats
Denis Yarats
3 years
Key insights from working on DrQ-v2: 1. n-step returns with DDPG work much better than SAC. 2. a 10x bigger replay buffer is needed for better results. 3. decaying the variance of the exploration noise gains the last bit of performance. 2/N
Tweet media one
1
0
16
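To make points 1 and 3 above concrete, here is a rough sketch of an n-step return target and a linearly decaying exploration-noise schedule; the specific constants are illustrative assumptions, not DrQ-v2's actual hyperparameters.

```python
import torch

def n_step_return(rewards, discounts, bootstrap_q, gamma=0.99):
    """n-step TD target: sum of discounted rewards plus a bootstrapped Q at the n-th state.

    rewards, discounts: (batch, n) tensors; discounts are 0 at terminal steps.
    bootstrap_q: (batch, 1) target-critic value at the n-th state.
    """
    target = bootstrap_q
    for i in reversed(range(rewards.shape[1])):
        target = rewards[:, i:i + 1] + gamma * discounts[:, i:i + 1] * target
    return target

def noise_std(step, init=1.0, final=0.1, duration=500_000):
    """Linearly decay the std of the exploration noise added to the DDPG actor."""
    frac = min(step / duration, 1.0)
    return init + frac * (final - init)
```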
@denisyarats
Denis Yarats
11 months
🚀
@apostraphi
Phi Hoang
11 months
. @perplexity_ai made it on the @TODAYshow as one of the AI tools that can help make your life easier. ☀️🤯
Tweet media one
0
1
14
1
0
16
@denisyarats
Denis Yarats
2 years
We've built another cool search interface, this time over structured data. Please take a look!
@perplexity_ai
Perplexity
2 years
Introducing Bird SQL, a Twitter search interface that is powered by Perplexity’s structured search engine. It uses OpenAI Codex to translate natural language into SQL, giving everyone the ability to navigate large datasets like Twitter.
226
2K
9K
0
2
15
@denisyarats
Denis Yarats
4 years
Check out our open-sourced code for DCEM and some cool examples of its usage:
@brandondamos
Brandon Amos
4 years
With @denisyarats , we've released the @PyTorch code and camera-ready version of our #ICML2020 paper on the differentiable cross-entropy method. Paper: Code: Videos: More details in our original thread:
1
43
134
0
0
14
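For context, the vanilla (non-differentiable) cross-entropy method that DCEM builds on iteratively refits a Gaussian to the top-k samples of an objective; the differentiable version replaces the hard top-k with a soft, temperature-controlled selection. The sketch below is only the vanilla baseline, with assumed dimensions and hyperparameters.

```python
import torch

def cross_entropy_method(objective, dim, iters=10, pop=100, elite=10):
    """Minimize `objective` over R^dim by iteratively refitting a Gaussian to the elites."""
    mean = torch.zeros(dim)
    std = torch.ones(dim)
    for _ in range(iters):
        samples = mean + std * torch.randn(pop, dim)      # sample a population
        scores = objective(samples)                       # (pop,) costs
        elites = samples[scores.topk(elite, largest=False).indices]
        mean, std = elites.mean(0), elites.std(0) + 1e-6  # refit the Gaussian
    return mean

# Example: minimize a shifted quadratic.
sol = cross_entropy_method(lambda x: ((x - 2.0) ** 2).sum(-1), dim=5)
```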
@denisyarats
Denis Yarats
3 years
We have an exciting set of speakers ( @pabbeel , @KelseyRAllen , @xkianteb , @chelseabfinn , @hardmaru , @danijarh , @rosemary_ke , @alelazaric ) that will bring a diverse perspective on the challenges of unsupervised exploration and representation learning in RL. 2/
1
0
15
@denisyarats
Denis Yarats
4 months
Thanks Karri & the team for building @linear , we can't live without it!
@karrisaarinen
Karri Saarinen
4 months
Was trying to find a story about a director who always wanted to change one thing. The designer added a duck to the game character so it'd be the most obvious thing to change. Google failed horribly. Tried a lot of ways and keywords. @perplexity_ai found it in one single ask
Tweet media one
Tweet media two
25
10
239
0
0
15
@denisyarats
Denis Yarats
4 months
a thought-provoking point by @adamdangelo
@a16z
a16z
4 months
CEO of @Quora & @Poe_Platform @adamdangelo argues that startups are uniquely positioned to provide a level of fault tolerance unattainable by incumbents. He & @DavidGeorge83 discuss getting AI infrastructure to the masses & its multi-model, multi-modal future.
11
16
95
0
3
15
@denisyarats
Denis Yarats
5 months
Now, you can set @perplexity_ai as the default search engine on the amazing @arc browser!
@browsercompany
The Browser Company
5 months
Your search engine, your internet @perplexity_ai is now a default search engine option 🌐 Live in Arc 🌐
73
98
2K
1
0
13
@denisyarats
Denis Yarats
4 years
We identify that overfitting is a major problem in RL from pixels, as increased model capacity negatively correlates with performance. This makes sense, as an off-policy RL agent initially trains on a very small replay buffer and can easily overfit to it: [3/N]
Tweet media one
1
1
14
@denisyarats
Denis Yarats
3 years
Pre-training strikes again!
@LerrelPinto
Lerrel Pinto
3 years
Excited to release some of our latest work on visual imitation! We surprisingly find that if you have the right visual representation, even simple k-NN regression works well for both offline imitation and real robotic tasks. paper+code+data: (1/5)
2
46
237
0
4
13
@denisyarats
Denis Yarats
7 months
the OpenAI board meeting featuring the shoggoth is in progress
Tweet media one
0
0
13
@denisyarats
Denis Yarats
2 years
An awesome new algorithm for unsupervised RL from @MishaLaskin et al, a great addition to our URLB library of agents ()!
@MishaLaskin
Misha Laskin
2 years
New paper on unsupervised skill discovery - Contrastive Intrinsic Control. Tl;dr exploration with contrastive skill learning substantially improves prior skill discovery methods (by 1.8x)! Achieves leading unsupervised RL results. Learn more 👇 1/N
Tweet media one
2
20
106
0
2
14