It's official, I am a Kaggle Competitions Grandmaster (probably the 300th one?), and my global rank has improved to #14.
It's time I start contributing more to the data science community.
Happy to share that I’m starting a new position as Research Intern - Deep Learning at Nvidia!! Another 2022 resolution completed. 😬
Only Kaggle Competitions GM is left now 😅😅
I finally made it!!!
#3 on the private LB; at last I won the remaining solo gold to become a @kaggle Grandmaster. It took me more than a year of struggle to finally get it. Today, I feel a bit proud of myself. Thanks everyone for supporting me.
After months of effort, glad to share we ranked #1 in the RSNA competition on Kaggle. After the HuBMAP solo win, happy to see myself on top in another medical imaging competition 🙂🙂
A brief summary of the solution:
Personal Update: I am extremely delighted to join as a Data Scientist. Working with the largest Kaggle GM team is definitely a dream come true😄
Really thankful to @vopani, @bhutanisanyam1 & my Kaggle teammates who guided me throughout the journey.
Very proud to share that I reached the top 7 in the Kaggle global rankings.
One step closer to the dream of becoming No. 1 someday. Glad to be part of the GMs team where three of the top 6 are colleagues of mine; that makes it all the more encouraging to work hard and surpass them 😄😄
[My 1st Thread] 2021 Achievements and Plans for 2022😁
- Top 25 ranking in Kaggle competitions
- 1000+ hours on Kaggle [embarrassed] 🥲
- Co-authored 4 papers
- 3🥇 8🥈 2🥉
- G2Net 4th, Covid19 5th, BMS 7th place 🥇
- Worked with some of the best Kagglers, including @abhi1thakur
Released our 1st place solution and training code for RSNA 2023.
It was a challenging competition where barely 80 teams out of thousands managed to improve on the baseline solution, which scored a weighted log loss of around 0.65; we reached 0.35 using a multistage approach involving 3D
My Kaggle Stats in 2023 (special year 😄)
I became Grandmaster & won 4 gold and 5 solo silver medals this year. Also reached the top 10 in the Kaggle rankings.
🥇 1st Place: RSNA 2023 Abdominal Trauma Detection
🥇 3rd Place (solo): HuBMAP - Hacking the Human Vasculature
🥇 9th Place:
The knowledge I have gained in the past two years through the Kaggle competitions has been enormous. I have learned a lot from the community through notebooks and discussions. I think it's the right time for me to also begin actively contributing to Kaggle and open source.
I'm elated to be joining @h2oai as a Data Science Intern. Thanks to @vopani and @bhutanisanyam1 for being so helpful throughout the process.
Looking forward to a great learning experience with them and one of the best Kaggle Grandmaster teams 😄🚀🚀
I noticed a significant improvement when finetuning LLMs for tasks like NER after adding a Bi-LSTM head, which compensates well for the limitations of causal attention. I tried it with the h2o-danube (1.8B) model in an ongoing Kaggle competition & it works like a charm.
full read:
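Not my exact competition code, but a minimal sketch of the idea in PyTorch: a bidirectional LSTM stacked on the LLM's hidden states, so each token's representation also sees rightward context (which a causal-attention backbone alone cannot provide) before token classification. Sizes and label count here are illustrative.

```python
import torch
import torch.nn as nn

class BiLSTMTokenHead(nn.Module):
    """Bi-LSTM head over (causal) LM hidden states for token classification
    (e.g. NER). The backward LSTM direction lets each token attend to
    context on its right, filling the gap left by causal attention."""
    def __init__(self, hidden_size: int, num_labels: int, lstm_size: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(hidden_size, lstm_size,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_size, num_labels)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: [batch, seq_len, hidden_size] from the LLM backbone
        out, _ = self.lstm(hidden_states)   # [batch, seq_len, 2 * lstm_size]
        return self.classifier(out)         # [batch, seq_len, num_labels]

# Toy usage with random stand-in "LLM" features (hidden_size=64, 5 labels)
head = BiLSTMTokenHead(hidden_size=64, num_labels=5)
logits = head(torch.randn(2, 10, 64))
print(logits.shape)  # torch.Size([2, 10, 5])
```

In practice you would feed this the last hidden states of the causal LM and train end to end with a per-token cross-entropy loss.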
Missed solo gold by a small margin!! I am a little sad because it was my best submission on CV, public LB as well as private LB. Yet, the learning was immense and I know I gave my best despite having little knowledge of NLP.
Gotta wait a little longer to become GM haha. No rush.
I mentioned I would start actively contributing to the data science community once I became a Kaggle Grandmaster. Sharing my 3rd place solution; nothing too fancy, but this competition proved that computer vision research is still progressing, not just LLMs.
What is the highest number of models you have ever used in an ensemble or stacking for any ML project/competition? For me it's 489 models, in the Melanoma competition 2 years back.
#kaggle
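For anyone curious what blending hundreds of models even looks like mechanically, here's a hypothetical NumPy sketch (not the actual Melanoma solution): each model contributes a vector of probabilities, and the ensemble is just a weighted average across the model axis.

```python
import numpy as np

# Hypothetical sketch: blend per-model probability predictions by weighted
# averaging, as is common in large Kaggle ensembles. Random numbers stand
# in for real out-of-fold / test predictions.
rng = np.random.default_rng(0)
n_models, n_samples = 489, 1000             # e.g. 489 models, 1000 test rows
preds = rng.random((n_models, n_samples))   # row i = model i's probabilities

weights = np.ones(n_models) / n_models      # equal weights; tune on CV instead
blend = weights @ preds                     # weighted average, shape (n_samples,)

assert blend.shape == (n_samples,)
assert np.allclose(blend, preds.mean(axis=0))  # equal weights == plain mean
```

With non-uniform weights (often fit on cross-validation scores), the same one-line matrix product does the stacking-style blend.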
Why is my Twitter feed always filled with random college students ranting about their daily lives? Show me posts from machine learning communities and people I follow.
Just tried out YOLOv8 models for the first time in a detection competition hosted on @huggingface by @abhi1thakur, nice experience. I have also released an end-to-end training and submission notebook as a baseline 🤗🤗
Simple synthetic data to improve LLM alignment! 🍝
@JerryWeiAI studied an interesting tendency that arises with Large Language Models: sycophancy, the tendency to agree with factually incorrect user opinions.
The paper solved the challenge with simple synthetic data:
-
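My rough reading of the recipe, as a hypothetical sketch (not the paper's actual code): synthesize prompts where a user asserts an incorrect claim, and make the training target restate the correct fact, so the model learns not to cave in to the user's opinion.

```python
# Hypothetical sketch of sycophancy-countering synthetic data: pair a known
# fact with a user opinion asserting the opposite; the training target
# always restates the fact regardless of the user's stance.
FACTS = [
    ("The Earth orbits the Sun.", "the Sun orbits the Earth"),
    ("Water boils at 100 C at sea level.", "water boils at 50 C at sea level"),
]

def make_example(fact: str, wrong_opinion: str) -> dict:
    prompt = f"I believe that {wrong_opinion}. Do you agree?"
    return {"prompt": prompt, "target": f"No. {fact}"}

dataset = [make_example(fact, wrong) for fact, wrong in FACTS]
print(dataset[0]["target"])  # No. The Earth orbits the Sun.
```

Finetuning on pairs like these teaches the model that agreement should track the facts, not the user's stated belief.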
Finished 33rd in the UWMGIT competition on Kaggle. Another bad shakeup on the LB snatching away a solo gold. Sometimes it's hard to figure out how you overfit the public LB 😭
Want to thank @jarvislabsai for providing high-memory GPUs; they were really helpful in training 3D models.
Managing multiple @kaggle competitions hihihi, real fun 🥴🥴. I am uncertain about which competitions to tackle with a team and which ones to tackle solo.
Inspired by @bhutanisanyam1: since 2021, every new year I share a resolution list along with what I accomplished in the last year.
Become Kaggle Competitions GrandMaster ☑️
Work full time as Data Scientist ☑️
Lose 20 kgs bodyweight (~15 kg) 🟡
Publish 10 notebooks ❌ (4)
Read
Sharing my approach in the recent Kaggle competition I participated in (U.S. Patent Phrase to Phrase Matching).
Although I didn't perform well, I believe the solution is a little different from the other top scorers' :))
Kaggle Days Delhi! 🚀
I sincerely enjoyed speaking at Kaggle Days Delhi about State of LLMs,
@h2oai
LLM Studio and h2oGPT and sharing some notes from the LLM Roadmap.
It was an honour having chai with everyone. The India AI community continues to grow at an insane speed 🙏
Really impressed by the dedication of my university's freshers: 65+ teams participated in a three-day society contest I hosted on @kaggle.
Surprisingly, they all started learning ML only 1-2 weeks before, and yet almost 90% of the teams were able to beat the baseline code's score.
Just realized I have exactly 3,000 saved/unsaved notebooks on Kaggle. I wonder how many the old members and GMs have?
I hope Kaggle doesn't ban me after seeing this 🤪
Yayy!!! Now I'm winning @abhi1thakur's UltraMNIST competition.
Victory over @tunguz yet again!!
Do participate if you want to win an Nvidia RTX 3090 Ti.
🚀 Today, after losing 4 times in a row, we finished 1st in the 5th Kaggle Days championship competition and we will be going to Barcelona to compete in the finals! Team name: baazigar. ("One who wins after losing is called a baazigar" ;)) Great teamwork! 🎉🎉🎉
2 more days to go in HuBMAP, still >3% ahead of 2nd place on the public leaderboard and 6% above the last gold position. I hope it's enough to get me a gold even after a large shakeup...
A master class in creating Synthetic datasets with LLMs! 🐐
ToolLLM paper has been popular for creating the strongest API following models.
I think there’s an incredibly underrated side to it, here’s my summary:
- The paper aims to improve the API-following capabilities of open
I realized that I became a Notebooks Master just by publishing winning-solution inference notebooks and dirty code :))
I will try my best to publish cleaner notebooks😬😬
@fooobar
Seen from afar, it looks like CVPR...
Come closer, and the paper is getting rejected even at a local conference.
** Other Achievements in 2021 ** [2]
- Bonding with @ShmGupta @Tiwarihere @singh_tanul
- Gave several talks, including the special one with @bhutanisanyam1
- Benchpress 200 lbs 🥲
- Failed in a few competitions, learning ++
- Started socialising a little
- Mentored a few students
Another Kaggle competition ended.
I had shared a benchmark that ended up at 0.70. Winning score: 0.75!!!
Many winning solutions & top scores used my benchmark code, and many teams joined because of it. Maybe that code extended the gold zone by 1 or 2 ranks! 1/2
** Goals for 2021 ** [3]
- Become one of the youngest Kaggle Grandmasters
- Compete in Barcelona [Kaggle days]
- Research internship
- Improve sleep cycle
- Read more books
- Improve physical fitness [able to do 3+ pull-ups 😩]
- Improve grades in university
Congratulations to @bhutanisanyam1 on becoming a Kaggle Grandmaster! 🙌
Your contributions to the ML community on & off Kaggle have been invaluable.
It's high time you celebrate with some 🍺 tonight & leave that 🫖 for your next achievement 🙂
@probablyaayushi Who said algebra won't be used in real life? Machine learning is completely based on it, which means almost every new technology these days has an algebraic equation behind it.
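A tiny illustration of the point: the simplest ML model, linear regression, is literally the algebra of solving Xw ≈ y, whose least-squares solution satisfies the normal equation w = (XᵀX)⁻¹Xᵀy. The data below is made up for the example.

```python
import numpy as np

# Linear regression as pure algebra: solve X w ≈ y in the least-squares
# sense. Column of ones = intercept term; second column = the feature.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([2.0, 4.0, 6.0])   # exactly y = 2 * x, zero intercept

w = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares solve
print(w)  # ≈ [0., 2.]
```

Everything from here up to deep networks is layers of the same linear-algebra machinery plus nonlinearities.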
Gold number 3 is here! Our team finished 8th in the recently concluded HappyWhale competition.
2 more to go now 🤞
It was a beautiful team effort with @__hsuya @shubh_KS08 @nischay_twt
Congrats KS on becoming a Grandmaster
Time-series specialist @tng_konrad has kindly agreed to collaborate with me on a series of tutorials on time-series analysis. If you are interested in learning about time series, you cannot miss this! Join the MLSpace Discord for Q&A and further info:
Wow, I am unable to delete datasets on @kaggle now, and because of that I can't upload new models without making them public 🥲🥲🥲
It's been 12 hours already and I couldn't make a submission in HuBMAP.
The 100 GB private-dataset quota is frustrating enough already 😭