Nicolay Rusnachenko

@nicolayr_

310 Followers・92 Following・190 Media・2,509 Statuses

Medical Multimodal NLP (🖼+📝) Research Fellow @BU_Research・Information Retrieval・Software Developer・PhD in NLP・Opinions are mine

Bournemouth / London, UK
Joined December 2015
Pinned Tweet
@nicolayr_
Nicolay Rusnachenko
13 hours
📢 Today @BU_Research, as part of the weekly research seminars, 👨‍🎓 Xibin Bayes Zhou presents advances in applying Language Models to proteins. 🧬 Apparently, the application of language models to proteins is similar to NLP 🧵1/n
@nicolayr_
Nicolay Rusnachenko
7 months
Excited for the first day of workshops and talks at #ecir2024
@nicolayr_
Nicolay Rusnachenko
7 months
Let's be honest, arranging the @ecir2024 banquet at "Glasgow City Chambers" was a fabulous decision to say the least 🍷✨ #ecir2024
@nicolayr_
Nicolay Rusnachenko
7 months
This week we present the ARElight system, aimed at memory-effective structuring of large text collections, at the @ecir2024 demo track 📜💻 Keynotes and materials: Github: Poster: #ecir2024 #arelight #nlp #ir #sampling #graphs
@nicolayr_
Nicolay Rusnachenko
6 months
That's a 💎 milestone on better synthetic data preparation practices! Wondering about its application to low-resource-domain SFT 👀
@arankomatsuzaki
Aran Komatsuzaki
6 months
Better Synthetic Data by Retrieving and Transforming Existing Datasets repo: abs:
@nicolayr_
Nicolay Rusnachenko
7 months
Thanks for the fabulous night and celebration!
@ecir2024
ECIR2024
7 months
🍽️ The banquet is beginning at #ECIR2024 ! Wishing everyone a fantastic evening filled with great food, laughter, and wonderful conversations! 🎉🥂
@nicolayr_
Nicolay Rusnachenko
6 months
📢 Excited to share that our studies on LLM reasoning capabilities in Target Sentiment Analysis are out 🎉 🧵1/n [More on the findings' highlights ...] #llm #reasoning #nlp #sentimentanalysis #cot #chainofthought #zeroshot #finetuning
@nicolayr_
Nicolay Rusnachenko
6 months
So far I have experimented 🧪 with LLaVa 1.5 and Idefics 9B at scale, and they are quite handy out-of-the-box 📦 Even so, it is nice to see that even smaller versions, based on the most recent LLMs, are out 👏👀
@Prince_Canuma
Prince Canuma
6 months
mlx-vlm v0.0.4 is here 🎉 New models 🤖: - Idefics 2 - Llava (Phi and Llama 3) Improvements 🚀: - Q4 quantisation support for all models - Less imports to use generate() Up next 🚧: - More models - Support for multiple images Please leave us a star and send a PR
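A minimal sketch of the kind of out-of-the-box usage meant above, going through Hugging Face transformers rather than mlx-vlm; the llava-hf model id is the public 1.5 release, and "photo.jpg" is a placeholder path.

```python
# Minimal sketch: zero-setup LLaVa 1.5 inference via transformers.
# "photo.jpg" is a placeholder image path, not from the original post.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("photo.jpg")
# LLaVa 1.5 uses the USER/ASSISTANT chat template with an <image> slot.
prompt = "USER: <image>\nDescribe this image briefly. ASSISTANT:"
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(out[0], skip_special_tokens=True))
```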
@nicolayr_
Nicolay Rusnachenko
5 months
@lucas__crespo Data leakage of course 🌊😎
@nicolayr_
Nicolay Rusnachenko
5 months
These findings 👇 on (1) reliability in news articles and (2) applying language models to generate writer ✍️ feedback from the angle of readers' 👀📃 views through personas are 💎 to skim.
@hen_drik
Hendrik Heuer
5 months
Excited for our #CHI2024 contributions (1/2): Reliability Criteria for News Websites, 16 May, 11:30am Writer-Defined AI Personas for On-Demand Feedback Generation, 15 May, 2:00pm #CAISontour
@nicolayr_
Nicolay Rusnachenko
7 months
It is nice to see a small step towards target-oriented LLM adaptation from the perspective of retrieval augmentation techniques and enhanced end-to-end adaptation of the 🥞 stack: (i) knowledge (ii) passages (iii) LLM
@ContextualAI
Contextual AI
7 months
Today, we’re excited to announce RAG 2.0, our end-to-end system for developing production-grade AI. Using RAG 2.0, we’ve created Contextual Language Models (CLMs), which achieve state-of-the-art performance on a variety of industry benchmarks. CLMs outperform strong RAG
@nicolayr_
Nicolay Rusnachenko
7 months
@stevenhoi @hypergai Nice to see you publicly contributing to Multimodal AI advances, and LLMs in particular 👏
@nicolayr_
Nicolay Rusnachenko
2 months
@rohanpaul_ai Have to admit that the prompt concepts, at first glance, seemed to be out of this world 🤯🤔👏✨
@nicolayr_
Nicolay Rusnachenko
7 months
The Text2Story workshop opener at #ecir2024 shares a handy methodology and aligned studies for processing large texts such as books 📚, aimed at narrative extraction #nlp #story #books #narratives
@nicolayr_
Nicolay Rusnachenko
6 months
@alan_karthi Well done and thank you for sharing this technical report! 👏 I believe that access to Med-Gemini is restricted due to the specifics of the medical domain as well as the resulting LLM. Nonetheless, is Med-Gemini available for chatting, and under which license if so?
@nicolayr_
Nicolay Rusnachenko
4 months
📢 Excited to share the details of our submission 🥉 to @SemEvalWorkshop Track-3, which is based on CoT reasoning 🧠 with Flan-T5 🤖, as part of the self-titled nicolay-r 📜. Due to remote availability at #NAACL2024, presenting it by breaking down the system highlights here 👇 🧵1/n
@SemEvalWorkshop
SemEval
4 months
@SemEvalWorkshop 2024 starts tomorrow! Check out our exciting lineup of 65 posters and 10 talks here: Don’t miss our invited talks by @hengjinlp (with @_starsem ) and @andre_t_martins ! #mexico @naaclmeeting @shashwatup9k @harish @seirasto @giodsm
@nicolayr_
Nicolay Rusnachenko
5 months
The most recent OmniFusion VLLM, in which the authors adopt feature merging for CLIP-ViT-L and DINOv2, powered by Mistral-7B, was so impressive 🔥 This makes me wonder how far another 💎 concept for image encoding 👇 that involves MoE goes ... 👀
@mervenoyann
merve
5 months
it's raining vision language models ☔️ CuMo is a new vision language model that has MoE in every step of the VLM (image encoder, MLP and text decoder) and uses Mistral-7B for the decoder part 🤓
@nicolayr_
Nicolay Rusnachenko
6 months
Tools like this end up becoming a Swiss Army knife for a deeper understanding and a look 👀 at how one LLM differs from another 💎
@mahnerak
Karen Hambardzumyan
6 months
[1/7] 🚀 Introducing the Language Model Transparency Tool - an open-source interactive toolkit for analyzing Transformer-based language models. We can't wait to see how the community will use this tool!
@nicolayr_
Nicolay Rusnachenko
6 months
@Leik0w0 @Prince_Canuma Interesting! ... any other prospects on the necessity of a quantized SigLIP besides its adaptation for the Moondream tiny VLLM?
@nicolayr_
Nicolay Rusnachenko
6 months
@burkov Notably, the generated code tends to be verbosely commented, so that skimming through the comments gives a brief assurance of its correctness
@nicolayr_
Nicolay Rusnachenko
9 months
@drivelinekyle @abacaj This template varies from task to task, but the implementation in torch is pretty much similar. My personal experience is in sentiment analysis, so I can recommend:
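A minimal sketch of the kind of torch fine-tuning template meant here, specialised to sentiment classification; the bert-base model id and the two-example batch are illustrative, not the recommended materials (the original links did not survive the scrape).

```python
# Minimal sketch: the generic torch fine-tuning template, instantiated for
# 3-class sentiment classification. Model id and data are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

texts = ["Great service!", "The battery died in an hour."]
labels = torch.tensor([2, 0])  # 0=negative, 1=neutral, 2=positive

model.train()
for epoch in range(3):
    batch = tokenizer(texts, padding=True, return_tensors="pt")
    loss = model(**batch, labels=labels).loss  # cross-entropy over 3 classes
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```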
@nicolayr_
Nicolay Rusnachenko
7 months
@omarsar0 After briefly skimming the paper's main figure, the ranking idea strikes me as a unique way of enhancing reasoning. Thanks for sharing 👏
@nicolayr_
Nicolay Rusnachenko
6 months
@bindureddy Thanks for sharing it! 👏👀
@nicolayr_
Nicolay Rusnachenko
4 months
🤔 Coming to this from the perspective of IR over large textual data, I wonder how interesting it would be to see such forecasting for stories (series of events) extracted from literary novels 📚
@chenchenye_ccye
Chenchen Ye
4 months
📢New LLM Agents Benchmark! Introducing 🌟MIRAI🌟: A groundbreaking benchmark crafted for evaluating LLM agents in temporal forecasting of international events with tool use and complex reasoning! 📜 Arxiv: 🔗 Project page: 🧵1/N
@nicolayr_
Nicolay Rusnachenko
5 months
@HongyiWang10 @RutgersCS Congratulations, all the best on the Assistant Professor role 👏✨
@nicolayr_
Nicolay Rusnachenko
7 months
@_kire_kara_ @barbara_plank @IAugenstein Congratulations, well done! 👏
@nicolayr_
Nicolay Rusnachenko
2 months
@zouharvi Brilliant idea with cards that showcase the contribution and experiment outcomes 👏
@nicolayr_
Nicolay Rusnachenko
2 months
@HopeYanxu Congratulations! 👏
@nicolayr_
Nicolay Rusnachenko
6 months
@yuqirose Congratulations! 👏
@nicolayr_
Nicolay Rusnachenko
2 months
📢 Research presentation alert 🚨 In nearly 20 days I am ready to share personal findings on applying LLMs to reasoning 🧠 in the sentiment analysis task. Inviting everyone to my talk, feel free to join the event 🙌 Useful links: ⭐️ 📜
@JohnSnowLabs
JohnSnowLabs
3 months
@nicolayr_
Nicolay Rusnachenko
5 months
@Mai_Mahmoud_ Congratulations, Phinally Done! 👏
@nicolayr_
Nicolay Rusnachenko
7 months
@cpt_harv @selina_mey @delsweil Well done, joining with the congratulations!
@nicolayr_
Nicolay Rusnachenko
1 year
It has always been encouraging to see how high academia competes in scientific advances, and even more amazing to see that in sports! 🔥 💪 💪 💪 🚣‍♀️ 🚣‍♂️ 🛶 Yes, and nowadays it is still possible to see such trainings at the Hammersmith quayside in London! 🔥
@nicolayr_
Nicolay Rusnachenko
4 months
@SemEvalWorkshop our 🛠️ reforged 🛠️ version of THoR, which has: ✅1. Adapted prompts for Emotion Extraction ✅2. Implemented Reasoning-Revision 🧠 ✅3. A Google Colab launch notebook (details in further posts 🧵) ⭐️ Github: 🧵4/n #NAACL2024
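A minimal sketch of the three-hop chain-of-thought pattern THoR builds on, with Flan-T5 via transformers; the hop prompts are illustrative paraphrases, not the adapted templates from this reforged version.

```python
# Minimal sketch of THoR-style three-hop CoT prompting with Flan-T5.
# Hop prompts below are illustrative, not the paper's exact templates.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

def ask(prompt: str) -> str:
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(out[0], skip_special_tokens=True)

text = "The food arrived cold, but the staff apologised at once."
target = "staff"

# Hop 1: identify the aspect the target refers to.
aspect = ask(f"Given the sentence '{text}', which aspect is related to '{target}'?")
# Hop 2: extract the underlying opinion toward that aspect.
opinion = ask(f"Given the sentence '{text}', what is the opinion on the aspect '{aspect}'?")
# Hop 3: infer the final polarity from the accumulated context.
polarity = ask(
    f"Given the sentence '{text}', the aspect '{aspect}' and the opinion "
    f"'{opinion}', what is the sentiment polarity towards '{target}'? "
    "Answer positive, negative or neutral."
)
print(polarity)
```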
@nicolayr_
Nicolay Rusnachenko
7 months
An interesting viewpoint on pre-training the so-called language-centric LLMs in low-resource domains to "preserve" knowledge about rare languages #ecir2024 #llm #lowresourcedomain #pretraining
@nicolayr_
Nicolay Rusnachenko
4 months
@yufanghou Thanks! Not being physically at #NAACL2024 this year, but rather remotely at #SemEval, I found the quick poster skimming 💎, from the Summary Content Units to the semantic tree construction for the textual content 👀
@nicolayr_
Nicolay Rusnachenko
6 months
@RezaeiKeivan Congratulations on the paper acceptance! Well done 👏👀
@nicolayr_
Nicolay Rusnachenko
3 years
Over the last few years, quick transformer tuning for downstream tasks has become even more in demand. The earlier announced awesome-list of sentiment analysis papers has been enlarged with recent advances in more time-effective tuning techniques
@nicolayr_
Nicolay Rusnachenko
7 months
One of the inspiring directions (dimensions) of narrative in long texts highlighted at #ecir2024 #Text2Story is the spatial one 🌎🗺️ ... Further details on pipeline and timeline tracking ... 🧵[1/3]
@nicolayr_
Nicolay Rusnachenko
7 months
@shaily99 Handy to go with Overleaf + DrawIO for concept diagrams. For Overleaf, I form a "latest.tex" 📝 which later gets renamed to a specific date, so it eventually becomes a 📑 that can be gathered into "main.tex"
@nicolayr_
Nicolay Rusnachenko
6 months
@Paul_Antara04 The concept is really good, so making it mobile-friendly seems to be a huge step forward ✨👏
@nicolayr_
Nicolay Rusnachenko
2 years
Managing attitudes of a single text is not the only option available in AREkit (). We also consider laaaa...aaarge-scale collections of texts with relations, and how such collections can be handled by AREkit. Stay tuned) #arekit #nlp
@nicolayr_
Nicolay Rusnachenko
2 years
Thanks to Huizhi Liang for the provided opportunity, and to all the students who attended the first guest lecture at Newcastle University! 🎉🎉🎉 The presentation was devoted to advances in sentiment attitude extraction #newcastle #ml #nlp #arekit #lecture
@nicolayr_
Nicolay Rusnachenko
7 months
Catch me to find out more on how ARElight may be applied to your large texts 📚/📰 @ the #ecir2024 demo track 📜💻. Thanks to my @UniofNewcastle colleagues Huizhi Liang, Maxim Kalameyets, and @lshi_ncl for their work on the system ✨📺 📍 lobby poster / demo session #nlp #lm #ir #sampling
@nicolayr_
Nicolay Rusnachenko
2 years
A short post which demonstrates pipeline organization for inferring sentiment attitudes from mass-media texts #deepPavlov #arekit #ml #nlp #sentimentanalysis
@nicolayr_
Nicolay Rusnachenko
7 months
Thanks for the fun time at the dancefloor ✨💃🕺
@ecir2024
ECIR2024
7 months
Now the fun begins - it's ceilidh time!! 🕺🎉
@nicolayr_
Nicolay Rusnachenko
4 months
📢 I am happy to share that our studies at @UniofNewcastle, aimed at fictional character personality extraction from literary novels 📚 relying solely on ⚠️ book content ⚠️, have been ACCEPTED at #LOD2024 @ Toscana, Italy 🇮🇹 🎉 👨‍💻: 🧵1/n
@nicolayr_
Nicolay Rusnachenko
7 months
Self-attention -> windowed / sparse self-attention -> local + global self-attention -> Infini-attention 👏✨
@arankomatsuzaki
Aran Komatsuzaki
7 months
Google presents Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention 1B model that was fine-tuned on up to 5K sequence length passkey instances solves the 1M length problem
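A minimal sketch of the windowed (sliding-window) step in this progression: a causal mask that also caps how far back each token may attend; shapes and window size are illustrative.

```python
# Minimal sketch of windowed (sliding-window) self-attention, the second
# step in the progression above; plain PyTorch, toy shapes.
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    # True where query i may attend to key j: causal (j <= i) and
    # within the local window (i - j < window).
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    return (j <= i) & (i - j < window)

q = k = v = torch.randn(1, 8, 16)              # (batch, seq, dim)
scores = q @ k.transpose(-2, -1) / 16 ** 0.5   # (batch, seq, seq)
mask = sliding_window_mask(8, window=3)
scores = scores.masked_fill(~mask, float("-inf"))
out = scores.softmax(dim=-1) @ v               # each token sees at most 3 neighbours
```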
@nicolayr_
Nicolay Rusnachenko
3 months
📢 I believe such instruction-tuned LMs may represent a valuable contribution to advances in IR-related tasks such as sentiment analysis 💎📝
@AniketVashisht8
Aniket Vashishtha
3 months
Can we teach Transformers Causal Reasoning? We propose Axiomatic Framework, a new paradigm for training LMs. Our 67M-param model, trained from scratch on simple causal chains, outperforms billion-scale LLMs and rivals GPT-4 in inferring cause-effect relations over complex graphs
@nicolayr_
Nicolay Rusnachenko
5 months
@mudler_it @LocalAI_API Thanks for such a verbose explanation on the related differences! I believe I have to first find out more about function calling 👀
@nicolayr_
Nicolay Rusnachenko
5 months
📢 This strikes me as a valuable milestone 💎 in inputting personality traits into large language models 👀
@sylee_ai
Seongyun Lee
5 months
🚨 New LLM personalization/alignment paper 🚨 🤔 How can we obtain personalizable LLMs without explicitly re-training reward models/LLMs for each user? ✔ We introduce a new zero-shot alignment method to control LLM responses via the system message 🚀
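A minimal sketch of the general idea of controlling responses via the system message; the persona text and model name are placeholders, and this is not the paper's exact alignment method.

```python
# Minimal sketch: steering an LLM with personality traits via the system
# message. Persona and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

persona = (
    "You are a cautious reviewer who values evidence, prefers short answers, "
    "and flags uncertainty explicitly."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Should we deploy the new ranking model?"},
    ],
)
print(response.choices[0].message.content)
```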
@nicolayr_
Nicolay Rusnachenko
4 months
@NLPnorth @EliBassignana Is there a related open-access paper behind the thesis studies? 👀✨
@nicolayr_
Nicolay Rusnachenko
4 months
💎 The fact-checking domain and advances in it are important for performing IR over news and mass media. These advances may provide new potential approaches for enhancing LLM reasoning capabilities in author opinion mining / Sentiment Analysis ✨
@ManyaWadhwa1
Manya Wadhwa
4 months
Refine LLM responses to improve factuality with our new three-stage process: 🔎Detect errors 🧑‍🏫Critique in language ✏️Refine with those critiques DCR improves factuality refinement across model scales: Llama 2, Llama 3, GPT-4. w/ @lucy_xyzhao @jessyjli @gregd_nlp 🧵
@nicolayr_
Nicolay Rusnachenko
3 months
@windx0303 @IJCAIconf Can't get enough of how creative that is, the non-fitted "y" especially ✨ Well done, and I believe it is in for the win as well!
@nicolayr_
Nicolay Rusnachenko
5 months
📊 Results of gpt-4o (zero-shot) reasoning in Sentiment Analysis on English and non-English texts. Surprised to find low F1 results ⏬ in English, while multilingual capabilities (ru) are still at the top levels 👑 #gpt4o #reasoning #nlp #zsl #sentimentanalysis #benchmark
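A minimal sketch of the kind of zero-shot prompt behind such a run, via the openai client; the prompt wording, sample sentence, and label set are illustrative, not the benchmark's exact setup.

```python
# Minimal sketch: zero-shot target sentiment with gpt-4o.
# Prompt and example are illustrative, not the benchmark's templates.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def zero_shot_sentiment(text: str, target: str) -> str:
    prompt = (
        f"Sentence: {text}\n"
        f"What is the sentiment towards '{target}'? "
        "Answer with one word: positive, negative or neutral."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()

print(zero_shot_sentiment("The board praised the new CEO.", "CEO"))
```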
@nicolayr_
Nicolay Rusnachenko
7 months
@cramraj8 @KaustubhDhole @eugeneAgichtein Congratulations, well deserved! 👏
@nicolayr_
Nicolay Rusnachenko
7 months
Having become aware of the "Lost in the Middle" (LiM, see below) effect with 🤖, excited to find these related findings 👀 FYI:
@omarsar0
elvis
7 months
Long Context LLMs Struggle with Long In-Context Learning Finds that after evaluating 13 long-context LLMs on long in-context learning the LLMs perform relatively well under the token length of 20K. However, after the context window exceeds 20K, most LLMs except GPT-4 will dip
@nicolayr_
Nicolay Rusnachenko
5 months
@JLopez_160 @cohere Thank you for sharing this, interesting to see how it goes with LLMs 👏👀 Legal documents tend to be long, so how specifically do you sample them?
@nicolayr_
Nicolay Rusnachenko
6 months
📢 OneKE is the information extraction (IE) fine-tuned release on top of Meta-Llama-3-8B-Instruct. Wondering about its capabilities in other IR prospects 🧪👀
@zxlzr
Ningyu Zhang@ZJU
6 months
Just released new information extraction models: lora weights trained on Meta-Llama-3-8B-Instruct with IEPile corpus, and a full-parameter fine-tuned information extraction model, OneKE, based on Chinese-Alpaca-2-13B for the community! 🚀 #OpenSource #ModelWeights #IEPile #NLP
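A minimal sketch of attaching such IE LoRA weights to the Llama-3 base with peft; the adapter path is a placeholder, not the actual released repo id.

```python
# Minimal sketch: loading an IE LoRA adapter onto Llama-3-8B-Instruct.
# The adapter id below is a placeholder for the released weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the task-specific LoRA adapter (placeholder path).
model = PeftModel.from_pretrained(base, "path/or/repo/of-the-ie-lora-adapter")

prompt = "Extract all (entity, relation, entity) triples: Acme hired Jane Doe as CTO."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```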
@nicolayr_
Nicolay Rusnachenko
1 month
@_inesmontani The best preparation ever ✨
@nicolayr_
Nicolay Rusnachenko
7 months
@Giuli12P2 @l__ranaldi Despite not being present at EACL, happy to have randomly come across this post and the studies 👏
@nicolayr_
Nicolay Rusnachenko
5 months
@pesarlin @mapo1 @Jimantha @quantombone @cvg_ethz Congratulations, well deserved! 👏🎓
@nicolayr_
Nicolay Rusnachenko
6 months
@ChenSun92 @Google @blaiseaguera Wow, congratulations! All the best on the researcher journey at Google 👏
@nicolayr_
Nicolay Rusnachenko
7 months
@xue_yihao65785 Not yet an expert in Multimodal NLP, but delighted by such a concise sharing of the contributions! 👀 Well deserved 👏
@nicolayr_
Nicolay Rusnachenko
4 months
@aequa_tech @SimonaFrenda 👏👀 Are there key technical details 📃 behind the Debunker Assistant that were mentioned in the talk?
@nicolayr_
Nicolay Rusnachenko
3 months
@sucholutsky @NYUDataScience Congratulations, all the best in this role!👏
@nicolayr_
Nicolay Rusnachenko
7 months
Wondering about the most recent capabilities of instruction-tuned transformers in this direction 📝📊 👀
@UKPLab
UKP Lab
7 months
📢 Calling all #NLProc enthusiasts! The Shared Task on Perspective #ArgumentRetrieval is open for registration. Join us in this exciting challenge & develop methods to tailor arguments to different audiences! 📭 (1/🧵) #argmining_2024 #ACL2024NLP
@nicolayr_
Nicolay Rusnachenko
5 months
@DrSanchariDas @makesomeshitup_ @NDSSSymposium @rhlchatterjee @USENIXSecurity @sigchi Joining in with sincere sympathy 🙏 Congratulations on finalising the studies, and all the best with the quickest recovery.
@nicolayr_
Nicolay Rusnachenko
5 months
@oanacamb Congratulations, that's impressive! 👏
@nicolayr_
Nicolay Rusnachenko
5 months
@alvarobartt @Alibaba_Qwen @huggingface Any recommendations on remotely launching the 72B model for inference?
@nicolayr_
Nicolay Rusnachenko
7 months
@leifos @ACM_CHIIR Great news, congratulations!
@nicolayr_
Nicolay Rusnachenko
4 months
@chenchenye_ccye Interesting, thank you for sharing! 👀👏
@nicolayr_
Nicolay Rusnachenko
6 months
@j_foerst @AmazonScience @clockwk7 @JonnyCoook Congratulations on this achievement! 👏🎉
@nicolayr_
Nicolay Rusnachenko
6 months
📢 Looking at the reasoning prospects of LLMs in linguistic tasks, it is necessary to go with data in English! Here are my findings 📊 comparing LLM reasoning between data written in: ☑ English ✅ Non-English (Russian) Reasoning capabilities by F1(PN) tend to be
@nicolayr_
Nicolay Rusnachenko
6 months
When you're too confident with a large batch size on LLM fine-tuning, but the model generates long responses:
@darrenangle
darren
6 months
・ *゚   ・ ゚* ・。 *・。 *.。 。・ °*. RuntimeError: CUDA out of memory. 。。 ・ 。 ・゚ 。°*. 。*・。・ *゚   ・ ゚*
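A minimal sketch of the usual escape hatch for that error: keep the effective batch size but shrink per-step memory with gradient accumulation; the tiny linear model and random batches stand in for the LLM and its fine-tuning data.

```python
# Minimal sketch: gradient accumulation to dodge CUDA OOM while keeping the
# effective batch size. The toy model and random data are placeholders.
import torch
from torch import nn

model = nn.Linear(128, 2)                      # stand-in for the LLM
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
accum_steps = 8                                # effective_batch = micro_batch * 8

optimizer.zero_grad()
for step in range(64):
    x = torch.randn(4, 128)                    # micro-batch of 4 instead of 32
    y = torch.randint(0, 2, (4,))
    loss = loss_fn(model(x), y) / accum_steps  # scale to match full-batch average
    loss.backward()                            # gradients accumulate across steps
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```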