Phind Profile
Phind

@phindsearch

8,815 Followers
21 Following
6 Media
101 Statuses

AI answer engine for complex questions.

San Francisco
Joined June 2022
@phindsearch
Phind
1 year
We beat GPT-4 on HumanEval with fine-tuned CodeLlama-34B! Here's how we did it: 🚀 Both models have been open-sourced on Huggingface:
28
173
851
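The fine-tuned CodeLlama-34B checkpoints mentioned above were open-sourced on Hugging Face. Below is a minimal sketch of loading one of them with the transformers library; the repo id Phind/Phind-CodeLlama-34B-v2 is an assumption, so verify the exact model name and prompt format on the Hub.

```python
# Hedged sketch: loading one of Phind's open-sourced CodeLlama models from the
# Hugging Face Hub. The repo id below is an assumption; check huggingface.co/Phind.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Phind/Phind-CodeLlama-34B-v2"  # assumed repo id; verify on the Hub

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # a 34B model needs multiple GPUs or quantization
    device_map="auto",          # requires the `accelerate` package
)

prompt = "Write a Python function that checks whether a number is prime.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```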
@phindsearch
Phind
10 months
Our GPT-4-beating coding model is now the default on . It's also 5x faster than GPT-4. Learn more in our blog post:
20
59
483
@phindsearch
Phind
7 months
Introducing Phind-70B, our largest and most capable model to date! We think it offers the best overall user experience for developers amongst state-of-the-art models.
32
68
454
@phindsearch
Phind
10 months
@elonmusk Seems to do just fine
[image attached]
16
12
196
@phindsearch
Phind
10 months
🚀 While ChatGPT is pausing signups, Phind continues to be better at programming while being 5x faster. We’ve been rapidly adding capacity and it’s only getting faster. Check it out ➡️
15
16
120
@phindsearch
Phind
11 months
Phind has a new look! It's simpler, cleaner, and helps you focus on what matters. There's no easier way to get your technical questions answered than with Phind.
[image attached]
11
9
117
@phindsearch
Phind
4 months
GPT-4o is now available in Phind for all paid users!
20
7
97
@phindsearch
Phind
9 months
Announcing much faster Phind Model inference for Pro and Plus users. Your request will be served by a dedicated cluster powered by NVIDIA H100s for the lowest latency and a generation speed of up to 100 tokens per second. If you’re not yet a Pro user, join us at
39
4
84
@phindsearch
Phind
1 year
Introducing our VS Code extension! Phind now knows when to search the web or your codebase -- no more tab switching.
8
10
85
@phindsearch
Phind
2 years
With Hello, get answers and code snippets instantly using AI. For any question. Hover over the answer to see cited sources. And, it's not built on top of GPT -- we use our own proprietary models.
3
15
75
@phindsearch
Phind
7 months
@ali_moiz @paulg @voicesz_ Yep, and the main logic board seemed to be damaged too. Our compute provider replaced the entire underlying host server under warranty.
2
0
68
@phindsearch
Phind
9 months
🚀 Introducing GPT-4 with 32K context for Phind Pro users. If you’re not yet a subscriber, join us at .
8
4
53
@phindsearch
Phind
2 years
Thank you, @paulg !
@paulg
Paul Graham
2 years
One of the most interesting new YC startups is a search engine for programmers: .
81
186
2K
3
2
38
@phindsearch
Phind
1 year
@rauchg Thanks for the shoutout! You should try enabling Expert mode, it's even better. And we are big fans of Vercel :)
1
0
33
@phindsearch
Phind
2 years
📣 Exciting news! 🚀 You can now save your search results and share them with others! 🔍👀 Keep a consistent history of previous threads with ease.
[image attached]
3
2
30
@phindsearch
Phind
1 year
@WizardLM_AI All of your claims made here are false. 1. We launched our model first, and we've proven through S3 screenshots that the model we internally called "wizardcoder" was trained before your model. 2. We unequivocally trained our own models on our own data using our own methods. We
2
0
22
@phindsearch
Phind
1 year
@cloudstudio_es Meta trained a potentially similar model called “Unnatural Code Llama” that got 62% pass@1. It’s in the paper but it’s not released. And they used only 15k examples while we used 80k.
1
0
22
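The 62% figure above refers to pass@1 on HumanEval, which is normally reported with the unbiased pass@k estimator from the Codex paper (Chen et al., 2021). A minimal sketch follows, using made-up per-problem sample counts purely for illustration.

```python
# Hedged sketch of the unbiased pass@k estimator from the HumanEval/Codex paper.
# For each problem, n samples are generated and c of them pass the unit tests.
from math import comb
from statistics import mean

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples drawn from n passes,
    given that c of the n samples are correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: per-problem (n, c) counts are hypothetical, for illustration only.
results = [(10, 7), (10, 0), (10, 3), (10, 10)]
print(mean(pass_at_k(n, c, 1) for n, c in results))  # pass@1 estimate
```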
@phindsearch
Phind
6 months
We are excited and proud to be a signatory of SV Angel's Open Letter on AI:
4
3
20
@phindsearch
Phind
7 months
Join us for our San Francisco meetup on February 6th! We’d love to meet you and hear about how we can keep making Phind better for you. And, of course, food and drinks will be provided :)
3
2
16
@phindsearch
Phind
1 year
We're hosting an in-person meetup in SF! Please RSVP through the link below by tonight if you're attending. WHEN: Thursday, April 6th, 6:30pm WHERE: Cloudflare HQ, 101 Townsend St RSVP:
2
0
16
@phindsearch
Phind
1 year
@WizardLM_AI There’s been a misunderstanding about our internal nomenclature we used to train our first model (which we released first). We appreciate your contributions to the community, but our models have been trained completely independently of yours. If you look at the thread, you’ll see
[image attached]
2
0
15
@phindsearch
Phind
1 year
@WizardLM_AI It's simple: you're making these claims ALL based on us having a folder called "wizardcoder". But I've explained already that the original folder of the V1 Phind models had the word "wizardcoder" because the dataset's intentions were similar despite the methods and the dataset
2
0
14
@phindsearch
Phind
2 years
Hello is now Phind! Exciting updates coming soon.
0
3
9
@phindsearch
Phind
2 years
@elietoubiana @paulg For this question it's similar. But for more complex questions, especially those involving documentation or stack overflow/github issues, chances are that Phind can answer them better due to its internet access.
1
0
7
@phindsearch
Phind
1 year
@WizardLM_AI Thank you, but let’s get the facts straight. 1. We released our model first. We didn’t copy anybody. 2. We did not use evol-instruct actually. 3. We would’ve appreciated you reaching out to us privately with any concerns you might’ve had. If we had used any of your work
1
0
7
@phindsearch
Phind
2 years
Followup questions and suggested searches are saved too so you or someone else can continue where you left off.
2
0
5
@phindsearch
Phind
2 years
With Hello, get answers and code snippets instantly using AI. For any question. Hover over the answer to see cited sources. Regular Bing results are shown as well. And, it's not built on top of GPT -- we use our own proprietary models.
0
1
6
@phindsearch
Phind
1 year
We're hosting another IN-PERSON MEETUP in SF🚀 Please RSVP via the link below if you're attending. WHEN: Thursday, May 4th, 6pm WHERE: Cloudflare HQ, 101 Townsend St, San Francisco RSVP: Can't wait to see you there!
1
0
6
@phindsearch
Phind
2 years
This caching system also allows you to keep a history of your previous searches on the left panel. Enjoy!
0
0
5
@phindsearch
Phind
10 months
@MatthewBerman For design-heavy problems, we suggest trying the “Ignore search results” option in the model dropdown under the search bar.
1
0
5
@phindsearch
Phind
1 year
@abacaj Have you tried using Phind for this?
0
0
4
@phindsearch
Phind
10 months
@SaleemUsama @paulg @HingumTringum For refactoring code, you might benefit from enabling “Ignore search results” from the model dropdown under the search bar.
1
0
4
@phindsearch
Phind
10 months
@AndrewVoirol @NickADobos Did you know you can use Phind in Cursor with our VS Code extension?
1
0
2
@phindsearch
Phind
2 years
@maskobuilds @paulg Our answers cite their sources, so there's less hallucination. And, unlike Google, we don't dumb down your queries before answering.
1
0
3
@phindsearch
Phind
2 years
@brockjelmore @paulg Thanks @brockjelmore ! We still have a long way to go, but we have some exciting stuff in the pipeline.
0
0
2
@phindsearch
Phind
10 months
@gabrielnocode @paulg Hi Gabriel, do you have any cached links you can share? Also, for certain types of design tasks it might be helpful to turn on “Ignore search results” from the model dropdown under the search bar
0
0
2
@phindsearch
Phind
10 months
@cpdough I suggest you try with “Ignore Search Results” enabled in the model dropdown. This is definitely something the Phind Model should be capable of.
1
0
2
@phindsearch
Phind
2 years
Hello from write-only Twitter! We won't be able to view or reply to any responses on this platform. However, we would love to chat with you on our lively Discord server. We host user hangouts every week, so join us at for a great time!
0
0
2
@phindsearch
Phind
2 years
@husseinsyed73 @paulg @DadabhoyOmar Be obsessed with understanding your users' problems.
1
0
2
@phindsearch
Phind
7 months
@ali_moiz @paulg Our hypothesis is that specialized models can beat generalist models for things users care about from a product perspective. For us it’s speed and code performance. Those focused on building AGI might consider such optimizations/tradeoffs a distraction :)
2
0
2
@phindsearch
Phind
2 years
@craigsuperstar @paulg Thanks for sharing the example -- we'll take a look and keep making it better.
1
0
1
@phindsearch
Phind
1 year
@ocolegro Thanks, we'll take a look and get back to you!
1
0
1
@phindsearch
Phind
10 months
@dwlance @paulg Hi Doug, do you mind sharing the cached link for the query?
0
0
1
@phindsearch
Phind
2 years
@SoliMouse @paulg That essay has been very inspirational for us.
0
0
1
@phindsearch
Phind
1 year
@codinginflow @ground1_c2 We’ve made a bunch of improvements recently and run GPT-4 as the default answering engine. Would love to have you try it again and hear your feedback.
0
0
1
@phindsearch
Phind
10 months
@Futur3_Th1nk3r @paulg Have you tried toggling “Ignore search results” for longer structured code gen tasks?
0
0
1
@phindsearch
Phind
1 year
@ocolegro Odd, are you on the latest HF transformers main commit?
1
0
1
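The reply above suggests running the open-sourced model against a main-branch build of Hugging Face transformers, since at the time CodeLlama support could require the latest main commit rather than a tagged release. A small sketch for checking which build is installed (the ".dev0" heuristic is an assumption, not a guarantee):

```python
# Sketch: check the installed transformers build; support for new architectures
# often lands on the main branch before a tagged release.
import transformers

print(transformers.__version__)  # a ".dev0" suffix typically indicates a source/main install
# To follow main, install from source:
#   pip install git+https://github.com/huggingface/transformers.git
```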
@phindsearch
Phind
10 months
@ihti @paulg Hi Ihti, I’d suggest enabling “Ignore search results” from the model dropdown under the search bar if your task is optimizing large code samples.
0
0
1