Shahab Bakhtiari

@ShahabBakht

3,946
Followers
1,081
Following
245
Media
4,700
Statuses

|| assistant prof @UMontreal || leading the systems neuroscience and AI lab (SNAIL) || @Mila_Quebec || #NeuroAI || vision and learning in brains and machines

Montreal, Canada
Joined April 2013
Pinned Tweet
@ShahabBakht
Shahab Bakhtiari
1 year
Our lab 🐌now has a website and a logo: Check out our website to learn more about our approach and research goals. Big thanks to the great artist and neuroscientist @MariaZamfirPhD for helping with the logo.
Tweet media one
18
32
288
@ShahabBakht
Shahab Bakhtiari
2 years
🚨News🚨Absolutely thrilled to announce that, starting January 2023, I will be an Assistant Professor in the department of psychology at @UMontreal . I will be developing a NeuroAI research lab focused on visual perception and learning in artificial and biological neural networks
Tweet media one
79
29
500
@ShahabBakht
Shahab Bakhtiari
3 years
Check out our new paper: Can a single neural network explain specialized pathways of the visual system? TLDR; yes, but only if you train with a self-supervised predictive loss function! Here is a summary of our paper: (1/n)
4
87
337
@ShahabBakht
Shahab Bakhtiari
16 days
I’ll be recruiting 1-2 graduate students for Fall 2025 to work on visual learning generalization in humans and artificial neural networks. If you’re interested, apply! Check out our lab website () and reach out if you need more info. RT please.
@Mila_Quebec
Mila - Institut québécois d'IA
19 days
📷 Meet our student community! Interested in joining Mila? Our annual supervision request process for admission in the fall of 2025 is starting on October 15, 2024. More information here
0
3
28
5
95
266
@ShahabBakht
Shahab Bakhtiari
3 years
If 16,000 A100 GPUs is the answer, what is the question?
@AIatMeta
AI at Meta
3 years
Meta is announcing the AI Research SuperCluster (RSC), our latest AI supercomputer 💻 for AI research. RSC will allow our researchers to do new, groundbreaking experiments in #AI . Learn more about RSC and the important role it will play:
43
223
919
17
4
215
@ShahabBakht
Shahab Bakhtiari
4 years
Does Tweeting Improve Citations? A randomized trial tl;dr: yes it does.
2
62
201
@ShahabBakht
Shahab Bakhtiari
2 months
Neuroscientists should closely follow the mechanistic interpretability efforts in AI and the debates around it, if they’re not already. Eg see how much the example below (+ the whole thread) is relevant to causal manipulation experiments in the brain.
@amuuueller
Aaron Mueller
3 months
Second big issue: if there are multiple causes of the same effect, it would be very easy to miss both of them if we only ablate one at a time. A related concept is preemption: one cause prevents another from having any effect, and so we miss the preempted cause.
Tweet media one
1
2
27
4
26
190
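The failure mode in the quoted tweet can be made concrete with a toy "circuit" of my own (a hypothetical example, not from the thread): when two redundant pathways drive the same output, ablating one at a time reveals nothing, and only a joint ablation exposes the causal structure.

```python
# Toy illustration of redundant causes defeating one-at-a-time ablation.
def circuit(a, b):
    # Output fires if either pathway carries the signal (an OR gate).
    return int(a or b)

baseline = circuit(1, 1)     # both pathways intact -> 1

# Single-unit ablations: each leaves the output unchanged,
# so neither pathway looks causally necessary on its own.
ablate_a = circuit(0, 1)     # still 1, same as baseline
ablate_b = circuit(1, 0)     # still 1, same as baseline

# Joint ablation finally reveals the redundant causal structure.
ablate_both = circuit(0, 0)  # 0: the effect disappears

print(baseline, ablate_a, ablate_b, ablate_both)  # 1 1 1 0
```

The same logic carries over directly to lesion and inactivation experiments in the brain: a null result from a single ablation does not rule out a causal role.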
@ShahabBakht
Shahab Bakhtiari
5 years
No, this isn’t from @tyrell_turing et al recent perspective on @NatureNeuro . This is David Robinson trying to make a similar point in 1992:
Tweet media one
8
31
144
@ShahabBakht
Shahab Bakhtiari
4 years
“Here, we demonstrate that predictive coding converges asymptotically (and in practice rapidly) to exact backprop gradients on arbitrary computation graphs using only local learning rule”
2
28
132
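The quoted claim can be illustrated on a tiny case. Below is my own minimal sketch (not the paper's code, and only the 2-layer linear special case): the output layer is clamped to the target, the hidden activity relaxes to minimize a sum of local prediction-error energies, and the resulting local weight update closely aligns with the exact backprop gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2-layer linear predictive-coding network, one sample (toy sizes)
n_in, n_hid, n_out = 5, 4, 3
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))
x0 = rng.normal(size=n_in)   # input (fixed)
t = rng.normal(size=n_out)   # target (output layer clamped to this)

# Inference: gradient descent on E = ||x1 - W1 x0||^2/2 + ||t - W2 x1||^2/2
x1 = W1 @ x0                      # start at the feedforward prediction
for _ in range(500):
    e1 = x1 - W1 @ x0             # bottom-up prediction error
    e2 = t - W2 @ x1              # top-down prediction error
    x1 -= 0.1 * (e1 - W2.T @ e2)  # dE/dx1

# Local (Hebbian-like) update direction vs. exact backprop gradient descent
# direction for the loss L = ||t - W2 W1 x0||^2 / 2
g_pc = np.outer(x1 - W1 @ x0, x0)
g_bp = np.outer(W2.T @ (t - W2 @ (W1 @ x0)), x0)

cos = np.sum(g_pc * g_bp) / (np.linalg.norm(g_pc) * np.linalg.norm(g_bp))
print(round(cos, 3))  # close to 1: the local update aligns with backprop
```

With small weights the two directions are nearly parallel; the paper's point is that this correspondence holds far more generally, on arbitrary computation graphs.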
@ShahabBakht
Shahab Bakhtiari
3 years
Neuro-AI folks, how do you keep up with both neuro and AI literature? What’s your strategy? It’s a full-time job!!! (And yes, giving up is always an option)
19
11
131
@ShahabBakht
Shahab Bakhtiari
2 years
I very much agree with @markdhumphries that evolution doesn't care what we call a brain area. But, what evolution might care about is modularity as an efficient way of expanding a constantly adapting system. A 🧵:
@markdhumphries
Mark Humphries
2 years
Ah, I see the brain localisation wars have started up again Just going to lob this in and run back to my dugout:
13
56
343
6
22
123
@ShahabBakht
Shahab Bakhtiari
5 years
By far one of the best analyses I’ve ever read on academia as a business model, the role of postdocs as its horsepower, and how Twitter can break down the whole system for good. Can Twitter Save Science? - Alex Danco's Newsletter
3
41
113
@ShahabBakht
Shahab Bakhtiari
3 years
Representational drift in the mouse visual cortex, by Daniel Deitch, Alon Rubin, and Yaniv Ziv. The intrinsic structure of the movie representation doesn't change over time despite drift in the coding of the individual neurons supporting it.
Tweet media one
0
23
107
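The dissociation described here, single-neuron drift alongside stable population geometry, can be illustrated with a toy construction of my own (not the paper's analysis): rotating population activity changes every neuron's tuning but leaves the pairwise distances between stimulus representations untouched.

```python
import numpy as np

rng = np.random.default_rng(1)

n_stimuli, n_neurons = 20, 50
day1 = rng.normal(size=(n_stimuli, n_neurons))  # stimuli x neurons

# "Drift" modeled as a random orthogonal transform of population activity
Q, _ = np.linalg.qr(rng.normal(size=(n_neurons, n_neurons)))
day2 = day1 @ Q

# Individual neurons drift: a given neuron's tuning decorrelates across days
tuning_corr = np.corrcoef(day1[:, 0], day2[:, 0])[0, 1]

def rdm(X):
    # Representational dissimilarity matrix: pairwise Euclidean distances
    sq = np.sum(X**2, axis=1)
    return np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0))

# ...but the representational geometry across stimuli is preserved exactly
drift = np.max(np.abs(rdm(day1) - rdm(day2)))
print(round(abs(tuning_corr), 2), drift)
```

A rotation is of course only one way to drift; the point of the toy example is that stable geometry does not require stable single-neuron codes.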
@ShahabBakht
Shahab Bakhtiari
4 years
Octopus arms show a phototactic response to light, and most interestingly, the motor response is channeled through the central nervous system (not local reflexes). So it’s kinda like having an 👁 in your palm, and moving your arm based on what your hand “sees”.
Tweet media one
3
11
99
@ShahabBakht
Shahab Bakhtiari
4 years
It’s hard to fully understand the importance of her work unless you’re doing science in a developing country where universities and research institutes can’t afford outrageous subscription fees.
@riotscienceclub
RIOT Science Club
4 years
We're still hyped about last week's RIOTS, where we had Alexandra Elbakyan ( @ringo_ring ) share her thoughts on open knowledge, copyright and #SciHub . If you've missed it (or want to rewatch it): 📺 Huge thanks again to @mariiabocharova for translating
5
146
516
0
13
95
@ShahabBakht
Shahab Bakhtiari
26 days
Not all visual features are treated equally in brains or artificial neural networks; some are favored by more neurons. What are the behavioural and learning consequences of these biased representations? I discuss this question in a new blog post: (1/4)
3
12
87
@ShahabBakht
Shahab Bakhtiari
4 years
Is a ‘beach animal’ a computer?
9
8
83
@ShahabBakht
Shahab Bakhtiari
2 years
It’s worth re-reading Olshausen and Field’s 2006 “What is the other 85% of V1 doing?” to see how far we’ve come since then: 1/n
2
11
80
@ShahabBakht
Shahab Bakhtiari
10 months
This work from @FieteGroup looks pretty cool: Showing how structure in the sensory cortex (hierarchy, topography, etc.) can emerge from self-organizing dynamics and spontaneous activity on a cortical sheet. Looking forward to reading this one.
1
4
80
@ShahabBakht
Shahab Bakhtiari
8 months
“our results suggest that the ventral visual cortex, like DNN models, provides a texture-like basis set of features, and that further neural computations, perhaps downstream of IT, are necessary to account for the shape selectivity of visual perception.”
2
17
73
@ShahabBakht
Shahab Bakhtiari
4 years
I'm learning a lot from this book - by Striedter & Northcutt.
Tweet media one
Tweet media two
5
9
73
@ShahabBakht
Shahab Bakhtiari
2 years
Prestige seeking behaviors are built into the fabric of academia (and its evaluation systems). Pre-pub peer review & journals have evolved to serve this need in academics. Demolishing it will leave an empty space that will be filled by alternatives that are not necessarily better
8
10
68
@ShahabBakht
Shahab Bakhtiari
1 year
Join our NeuroAI workshop in Montreal 🧠🤖 Check out the list of speakers, register, and submit an abstract.
@g_lajoie_
Guillaume Lajoie
1 year
🚨Announcing a Workshop on advances in NeuroAI🚨: Oct 10-13, held at @Mila_Quebec , Montréal. Abstract submission is open, travel grants for trainees available, speaker list and more 👉 IN-BIC, @ai_unique , @CIFAR_News , @pimsmath , @IVADO_Qc
2
60
142
0
13
68
@ShahabBakht
Shahab Bakhtiari
3 years
Watching #AlphaFold2 unfold, it’s hard to resist asking what neuroscience problem could be pushed toward a solution by throwing engineering and compute resources at it. Some might argue that neuro isn’t even at the stage of having such a well-defined question
6
6
64
@ShahabBakht
Shahab Bakhtiari
1 year
Had heard it from others, but now it’s firsthand experience. The joy of hanging out with these young and fun scientists is the best part of running a lab. The first SNAIL social event 🐌
Tweet media one
Tweet media two
Tweet media three
1
2
62
@ShahabBakht
Shahab Bakhtiari
2 years
"Dissociation in neuronal encoding of object versus surface motion in the primate brain" Showing long-range motion direction selectivity in V4 (up to 29% of neurons, using neuropixel recordings)
0
13
60
@ShahabBakht
Shahab Bakhtiari
4 years
A figure in a recent paper by @talia_konkle and Alvarez () reopened an old question for me. The figure shows that an AlexNet architecture with group normalization has more similar representations to the brain than one with batch normalization.
Tweet media one
2
15
57
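The architectural difference behind that comparison is easy to state: batch normalization ties each sample's normalization to batch-level statistics, while group normalization computes statistics within each sample alone. A minimal numpy sketch of my own (not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(8, 32, 4, 4))  # (batch, channels, H, W)

def batch_norm(x, eps=1e-5):
    # Statistics per channel, pooled across the whole batch
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def group_norm(x, groups=8, eps=1e-5):
    # Statistics per sample, within channel groups: no batch dependence
    n, c, h, w = x.shape
    g = x.reshape(n, groups, c // groups, h, w)
    mu = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mu) / np.sqrt(var + eps)).reshape(n, c, h, w)

# GroupNorm's output for sample 0 is identical whatever else is in the batch;
# BatchNorm's is not.
gn_full, gn_solo = group_norm(x)[0], group_norm(x[:1])[0]
bn_full, bn_solo = batch_norm(x)[0], batch_norm(x[:1])[0]
print(np.allclose(gn_full, gn_solo), np.allclose(bn_full, bn_solo))
```

That per-sample independence is one candidate reason a GroupNorm network might look more brain-like: a biological neuron has no access to statistics pooled over a batch of other trials.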
@ShahabBakht
Shahab Bakhtiari
3 years
Science has definitely become more inclusive with the virtual conferences. Someone who hasn't dealt with visa offices can't see the true point of virtual conferences.
@neurobongo
David Cox
3 years
When live conferences do come back, I do think we should keep virtual participation as an option, to allow broader participation. But I feel like all-virtual conferences are slowly strangling academia.
4
0
27
1
9
53
@ShahabBakht
Shahab Bakhtiari
1 year
Make sure you submit a request if you’re interested in joining the NeuroAI community in Montreal
@Mila_Quebec
Mila - Institut québécois d'IA
1 year
Interested in joining Mila's research community? Our annual supervision request process for new Mila students is starting this Sunday October 15, 2023. For more information ➡️
Tweet media one
2
46
149
0
6
55
@ShahabBakht
Shahab Bakhtiari
9 months
New paper @UMontreal @Mila_Quebec (In collab with Chris Pack's lab @TheNeuro_MNI @mcgillu ) We studied Visual Perceptual Learning (VPL) in distinguishing stimuli that have "asymmetric" representations in the visual cortex. How does this asymmetry affect VPL? 🧵(1/10)
@ARVOJOV
Journal of Vision
9 months
Shahab Bakhtiari @ShahabBakht et al. @UMontreal find that asymmetric stimulus representations bias visual perceptual learning.
Tweet media one
0
1
14
2
5
53
@ShahabBakht
Shahab Bakhtiari
3 years
Mini two-photon microscope for fast, high-resolution, multiplane calcium imaging of over 1,000 neurons at a time in freely moving mice: "A novel technique for successive imaging across multiple, adjacent FOVs enabled recordings from more than 10,000 neurons ..." 🤯
Tweet media one
Tweet media two
1
12
54
@ShahabBakht
Shahab Bakhtiari
3 years
Interesting new paper from the Cichy lab, "The spatiotemporal neural dynamics of object location representations in the human brain" by @MonikaGraumann et al. Object location emerges in the ventral areas ~150 ms later on a high-clutter background.
Tweet media one
Tweet media two
Tweet media three
1
7
50
@ShahabBakht
Shahab Bakhtiari
2 years
The next generation of big neuro projects could look something like GPT-JT-6B: a community-based, distributed, large-scale model, optimized to align with all the neural/behavioral data we could put together. What % of the neuro community would get on board with that?
@DrJimFan
Jim Fan
2 years
This enables geo-distributed computing across cities or even countries. Now everyone can BYOC (“Bring Your Own Compute”) and join the training fleet to contribute to the open-source development. The scheduling algorithm makes no assumption about the device types. 3/
4
9
73
7
3
51
@ShahabBakht
Shahab Bakhtiari
2 years
Plenty of gems for new PIs in this note by @behrenstimb : Especially liked this one: “If you ever become Buzsaki, Deisseroth, or Dolan you can have people that work for you.  Until then, you work for them.”
1
8
50
@ShahabBakht
Shahab Bakhtiari
1 year
Every time a neuroscientist hints on a podcast that deep learning in neuroscience is merely engineering and regression and not science, one GPU dies.
1
1
49
@ShahabBakht
Shahab Bakhtiari
7 months
Another cool work on texture / shape bias but in large vision-language models: “we find that VLMs are often more shape-biased than their vision encoders, indicating that visual biases are modulated to some extent through text in multimodal models”
@ShahabBakht
Shahab Bakhtiari
8 months
“our results suggest that the ventral visual cortex, like DNN models, provides a texture-like basis set of features, and that further neural computations, perhaps downstream of IT, are necessary to account for the shape selectivity of visual perception.”
2
17
73
2
9
46
@ShahabBakht
Shahab Bakhtiari
3 years
This is very exciting 🎉🥂🥳
@patrickmineault
Patrick Mineault
3 years
Super excited that both @ShahabBakht and I's papers were accepted in NeurIPS as spotlights! Hope you're ready for a whirlwind tour of how the brain solves motion! Tweeprint here 👇
2
5
42
4
1
45
@ShahabBakht
Shahab Bakhtiari
4 years
This is so annoying 😤
@jagarikin
じゃがりきん
4 years
Something happens the moment it reaches the goal…!
109
3K
11K
1
10
46
@ShahabBakht
Shahab Bakhtiari
2 months
It’s quite possible that mouse and monkey V1s operate at different processing stages, as suggested in this super interesting review paper by Jon Kaas et al.:
@martin_hebart
Martin Hebart
2 months
Another study highlighting that visual findings from mice don't generalize to primates. 2 thoughts: 1) It's fascinating that mice have no orientation columns - architectural constraints? 2) It's crazy nobody has directly shown orientation columns in humans
3
25
126
0
6
45
@ShahabBakht
Shahab Bakhtiari
2 years
I’ll be talking about Self-Supervised Learning and how it can be used for modeling and studying the development of animals’ visual systems: what has been done and what we should do next. Join us if you’re interested.
@ResearcherApp
Researcher
2 years
Join us for a #ResearcherLive session on "Self-supervised learning: A new lens on animal visual development" with @ShahabBakht , @UMontreal @Mila_Quebec 🗓️ Tue, 18 April 2023 🕓 5 pm BST / 4 pm GMT Register now! 👇
Tweet media one
0
0
11
1
3
43
@ShahabBakht
Shahab Bakhtiari
2 years
I'm very happy to have the opportunity of building my lab in Montreal, one of the best and fastest growing NeuroAI ecosystems in the world. I'll be recruiting trainees at all levels. If you're interested in working at the intersection of neuroscience and AI, drop me a message.
3
5
42
@ShahabBakht
Shahab Bakhtiari
2 years
Lastly, to all Iranian students who are affected by the recent events: if you're considering applying to graduate programs, please let me know if I can be of any help.
2
2
42
@ShahabBakht
Shahab Bakhtiari
4 years
An interesting read ...
Tweet media one
2
5
41
@ShahabBakht
Shahab Bakhtiari
3 years
Eve Marder at @neuromatch 4.0 panel discussion: "If every single postdoc is convinced that he or she should have bona fides in machine learning, what does it tell us about herds?"
7
3
41
@ShahabBakht
Shahab Bakhtiari
3 years
Next time you delete your large model’s checkpoints, beware of the moral ambiguities of your action
@ilyasut
Ilya Sutskever
3 years
it may be that today's large neural networks are slightly conscious
450
562
3K
1
4
39
@ShahabBakht
Shahab Bakhtiari
2 years
I’ve migrated to Neuromatch’s mastodon instance. A great community of neuro and NeuroAI people already there. Join us if you haven’t already. (And for those who wonder if instances matter: yes - local timeline is where you see the difference)
0
9
40
@ShahabBakht
Shahab Bakhtiari
2 years
With prompts in Farsi, #stablediffusion generates a lot of beautiful Islamic architecture and Persian carpets, which have nothing to do with the content of the prompts.
Tweet media one
Tweet media two
Tweet media three
Tweet media four
2
0
41
@ShahabBakht
Shahab Bakhtiari
3 years
PaLM explaining jokes to us, DALL-E drawing our dreams, and now learning that transformers have been like the hippocampus all along … what a week!
@jmhessel
Jack Hessel
3 years
"An oil pastel painting of a skeptical researcher absolutely amazed by what he's seeing on a computer" (AI-generated image created by DALL-E with a prompt I wrote, top 1 sample)
Tweet media one
7
28
374
2
4
39
@ShahabBakht
Shahab Bakhtiari
4 years
@hardmaru This old video of Gibson’s experiment is relevant. Cliff avoidance existed even ~ 6 months old babies:
1
0
40
@ShahabBakht
Shahab Bakhtiari
3 years
Deep learning studies are getting closer and closer to monkey ephys: spend ~6-12 months training the first one, look at the data, and if you find anything interesting, go for the second one.
Tweet media one
@OriolVinyalsML
Oriol Vinyals
3 years
The Deep Learning Devil is in the Details. I love this work from @IrwanBello and collaborators in which they show how training "tricks" improve ~3% absolute accuracy on ImageNet, progress equivalent to years of developments and research! Paper:
Tweet media one
16
227
1K
2
3
38
@ShahabBakht
Shahab Bakhtiari
3 years
Number of Vision Science Society (VSS) abstracts with 'neural networks' in the title since 2001. Looks like a paradigm shift ... and no sign of saturation! Something happened in 2016 - maybe TensorFlow?! Or a lagged response to @dyamins et al. (2014)?
Tweet media one
2
7
38
@ShahabBakht
Shahab Bakhtiari
4 years
Natural scene representation in mouse V1 requires postnatal visual experience
Tweet media one
Tweet media two
0
3
31
@ShahabBakht
Shahab Bakhtiari
4 months
Interpreting any failure of AI models as a failure of their "understanding" has proven to be a mistake. Soon, a model will generate accurate gymnastics videos, and we will have to choose between accepting the AI's understanding of human body motion or moving the goalposts.
@autismsupsoc
🌟Cheshire Cat ᓚᘏᗢ,
4 months
CW: Body Horror? This AI video attempt to show gymnastics is one of the best examples I have seen that AI doesn’t actually understand the human body and it’s motion but is just regurgitating available data. (Which appears to be minimal for gymnastics)
447
1K
6K
7
3
34
@ShahabBakht
Shahab Bakhtiari
3 years
@fchollet The hard part is to get your arXiv preprint seen by as many ppl as possible. That’ll have a huge effect on the “adoption” part. One could put brilliant work on arXiv but might not survive the overly skewed attention competition out there, especially given corporates’ PR machinery
1
0
35
@ShahabBakht
Shahab Bakhtiari
8 months
Can’t ignore potential links with low-rank communication subspaces observed in the brain, eg
@BlackHC
Andreas Kirsch 🇺🇦
8 months
2/ GaLore takes a novel approach compared to methods like LoRA and ReLoRA. Instead of doing low-rank projection in weight space (W = W_0 + A @B ) and baking this into the weights every T steps, GaLore performs low-rank projection in gradient space: G = P @ G' @ Q^T.
Tweet media one
Tweet media two
1
0
24
2
6
34
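The projection step described in the quoted thread can be sketched in a few lines. This is my own simplification, not GaLore's actual implementation: compress a gradient matrix into a low-rank subspace obtained from its SVD, work in that small space, and project back.

```python
import numpy as np

rng = np.random.default_rng(0)

G = rng.normal(size=(64, 32))  # full gradient for one weight matrix
r = 4                          # rank of the projection subspace

# Projection bases from the gradient's SVD (in GaLore these are only
# refreshed every T steps, so the cost amortizes)
U, s, Vt = np.linalg.svd(G, full_matrices=False)
P, Q = U[:, :r], Vt[:r, :].T

G_low = P.T @ G @ Q    # r x r compressed gradient ("G'" in the thread)
G_back = P @ G_low @ Q.T  # projected back to full size

# The round trip is the best rank-r approximation of G (Eckart-Young)
err = np.linalg.norm(G - G_back) / np.linalg.norm(G)
print(G_low.shape, err < 1.0)
```

The optimizer state (e.g. Adam moments) then lives in the tiny r x r space, which is where the memory savings come from; the possible analogy to low-rank communication subspaces between brain areas is, of course, speculative.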
@ShahabBakht
Shahab Bakhtiari
3 months
Even if you’re NOT a big fan of my research but have a guess at what my next paper would be, I’m very eager to listen.
@KordingLab
Kording Lab 🦖
4 months
Dear (prof first name), I have long been a fan of your research. Here is what your logically next paper would be (without saying it) and I would love to work on it and am ideally prepared for it. Can we talk? Yours, (Your first name)
6
10
104
0
1
33
@ShahabBakht
Shahab Bakhtiari
2 years
I wonder if people with “better” passports are aware of the “passport” effect. Do they consider it in their evaluations? It’s a real thing and it can hugely affect one’s career.
@EmtiyazKhan
Emtiyaz Khan
2 years
If I had known earlier, I could have done this before. They didn't direct me to the appointment page until I finished the documents. Emailed the embassy and got a usual response. F*** this s***. I don't miss physical conferences. They are for people with better passports.
7
7
90
1
7
34
@ShahabBakht
Shahab Bakhtiari
3 years
Google’s new Pathways project is a great example of what corporate labs are good at: scaling up a simple, promising idea and pushing it to its limits. Small academic labs should be good at developing those simple ideas and demonstrating their potential.
2
1
34
@ShahabBakht
Shahab Bakhtiari
5 years
David Sussillo mentioned this old paper by Zipser and Anderson in his talk @neuroAIworkshop at #NeurIPS : It’s perhaps one of the first examples of comparing representations in the brain and artificial neural nets.
Tweet media one
2
5
34
@ShahabBakht
Shahab Bakhtiari
4 years
This result by SimCLRv2 (a self-supervised model) is impressive. With only 1% of ImageNet labels, SimCLRv2 is as good as the fully supervised ResNet-50.
Tweet media one
1
5
34
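The contrastive (NT-Xent) objective behind SimCLR is compact enough to sketch. This is my own toy numpy version, not the SimCLRv2 code: two augmented "views" of the same image should land close together in embedding space, with every other embedding in the batch serving as a negative.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    # z1[i] and z2[i] are embeddings of two views of the same image
    z = np.concatenate([z1, z2])                  # 2N embeddings
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau                           # cosine similarity / temperature
    np.fill_diagonal(sim, -np.inf)                # exclude self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # each row's positive
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

rng = np.random.default_rng(0)
anchors = rng.normal(size=(16, 8))
aligned = anchors + 0.01 * rng.normal(size=(16, 8))  # views that agree
unrelated = rng.normal(size=(16, 8))                 # views that don't

# Agreeing views should give a lower contrastive loss than unrelated ones
print(nt_xent(anchors, aligned) < nt_xent(anchors, unrelated))
```

No labels appear anywhere in the loss, which is what makes the 1%-of-labels fine-tuning result above so striking.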
@ShahabBakht
Shahab Bakhtiari
1 year
I love it every time @AnthropicAI mentions the link between their exciting work and prior work in neuroscience. It's not something you see frequently in AI these days
@AnthropicAI
Anthropic
1 year
Our research builds on work on sparse coding and distributed representations in neuroscience, disentanglement and dictionary learning in machine learning, and compressed sensing in mathematics.
1
6
293
0
1
34
@ShahabBakht
Shahab Bakhtiari
3 years
Our NeurIPS 2021 papers now have their own websites. Check them out for the revised papers, code, and some visualizations.
@tyrell_turing
Blake Richards
3 years
Our two spotlight papers (by @ShahabBakht and @patrickmineault ) at #NeurIPS2021 showing how ANNs can develop dorsal stream representations now have their websites up (with links to the updated papers and code):
0
27
105
0
4
34
@ShahabBakht
Shahab Bakhtiari
3 years
Enjoyed reading this paper by Yaoda Xu and Maryam Vaziri-Pashkam: Careful analysis of representational similarities between 6 areas of human visual cortex and 14 different ImageNet-trained ANNs. Early areas can be ‘fully’ explained, but not higher areas
0
6
33
@ShahabBakht
Shahab Bakhtiari
4 years
Yeah... couldn’t be better!
Tweet media one
0
0
33
@ShahabBakht
Shahab Bakhtiari
10 months
On the emergence of specialized streams in neural networks: 1/ specialization can emerge without task-specific optimization. Self-supervised learning, as shown in recent work by @dfinz et al and our previous work, can lead to the emergence of specialized streams.
1
6
33
@ShahabBakht
Shahab Bakhtiari
2 years
I know that people who follow me here are not doing that for politics. But this is not politics! Last time this happened, thousands were arrested and hundreds - if not more - were killed on the streets of Iran - A preparation step for another bloodbath. #مهسا_امینی
@netblocks
NetBlocks
2 years
⚠️ Confirmed: Real-time network data show a nation-scale loss of connectivity on MCI (First Mobile), #Iran 's leading mobile operator, and Rightel; the incidents come amid widespread protests over the death of #MahsaAmini 📵 📰 Background:
Tweet media one
231
2K
4K
0
13
33
@ShahabBakht
Shahab Bakhtiari
2 years
Looking forward to reading this paper
Tweet media one
@biorxiv_neursci
bioRxiv Neuroscience
2 years
BCI learning phenomena can be explained by gradient-based optimization #biorxiv_neursci
0
3
22
1
2
31
@ShahabBakht
Shahab Bakhtiari
2 years
These findings from @TimKietzmann lab are among the most exciting I’ve seen recently. Especially, if you’re into predictive coding, make sure you read this paper. I also wrote a preview on why I believe this is a very interesting and important paper:
@TimKietzmann
Tim Kietzmann
2 years
Excited that our paper "Predictive coding is a consequence of energy efficiency in recurrent neural networks" is now out in @Patterns_CP ! Much has changed since preprint, so check it out: Work with the fantastic @hellothere_ali , @nasiryahm , and @marcelge !
3
47
154
2
3
31
@ShahabBakht
Shahab Bakhtiari
4 years
Happy to see our paper finally out. Perceptual improvements caused by Visual Perceptual Learning are highly specific to the training stimulus. In this paper, we showed that the specificity of VPL depends on the complexity of the training stimulus.
1
2
29
@ShahabBakht
Shahab Bakhtiari
3 months
This rule also applies spatially, not just temporally. If you’re a researcher in tech (or any field) and you’re not reading anything five metres away from you (metaphorically speaking), your own work will be forgotten five metres from you.
@docmilanfar
Peyman Milanfar
3 months
If you’re a researcher in tech and you’re not reading anything more than five years old, you can be sure your own work will be forgotten five years from now.
18
52
569
0
3
30
@ShahabBakht
Shahab Bakhtiari
4 years
Organization of LGN in primates, tree shrews, squirrels, and cats. Can I have this for the whole visual system please?! Thanks.
Tweet media one
1
5
30
@ShahabBakht
Shahab Bakhtiari
4 years
@neuroecology @DynamicEcology “Neural mechanisms for interacting with a world full of action choices” by Cisek and Kalaska For me, it broke down the perception/action dichotomy by using a neuronal interpretation of Gibsonian affordances.
2
3
30
@ShahabBakht
Shahab Bakhtiari
4 years
Finally ... humans playing Atari during fMRI "Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations in high-dimensional environments"
1
3
29
@ShahabBakht
Shahab Bakhtiari
4 years
I’m old enough to remember invitation-only Gmail. I’m not impressed by your Clubhouse shenanigans kiddo!
1
1
29
@ShahabBakht
Shahab Bakhtiari
2 years
Day 108 of PI life: This pile of unread papers is gonna crash down and kill me one day (literally and metaphorically). #AcademicChatter
4
0
28
@ShahabBakht
Shahab Bakhtiari
5 years
So, if we regularize an ANN to have representations similar to those of mouse visual cortex, the ANN becomes more robust to adversarial attacks. Here is the most interesting part of the paper for me:
Tweet media one
@arxiv_org
arxiv
5 years
Learning From Brains How to Regularize Machines.
Tweet media one
Tweet media two
Tweet media three
Tweet media four
0
4
11
0
6
29
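The general recipe can be sketched abstractly. The toy below is my own RSA-style version, not the paper's actual regularizer: add a penalty to the task loss that pushes the model's representational geometry toward the measured neural one.

```python
import numpy as np

rng = np.random.default_rng(0)

def rdm(X):
    # Representational dissimilarity: 1 - correlation between stimulus pairs
    return 1 - np.corrcoef(X)

def similarity_penalty(model_feats, brain_feats):
    # Mean squared difference between the two RDMs' upper triangles
    iu = np.triu_indices(len(model_feats), k=1)
    d = rdm(model_feats)[iu] - rdm(brain_feats)[iu]
    return np.mean(d**2)

stimuli = 30
brain = rng.normal(size=(stimuli, 100))     # recorded responses (stimuli x neurons)
model_far = rng.normal(size=(stimuli, 64))  # features with unrelated geometry
model_near = brain[:, :64] + 0.1 * rng.normal(size=(stimuli, 64))

# In training this term would be weighted into the objective:
#   total_loss = task_loss + lam * similarity_penalty(model_feats, brain)
print(similarity_penalty(model_near, brain) < similarity_penalty(model_far, brain))
```

Any differentiable similarity measure (CKA, centered RSA, etc.) could stand in for the penalty; the paper's claim is that pulling the geometry toward the mouse data buys adversarial robustness for free.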
@ShahabBakht
Shahab Bakhtiari
2 years
Behavior shapes retinal motion statistics during natural locomotion Another great work from Hayhoe lab! Describing statistical properties of the retinal motion patterns during walking, and how they're affected by gaze location and behavior.
Tweet media one
Tweet media two
0
9
28
@ShahabBakht
Shahab Bakhtiari
3 years
@neuroecology At least twice according to Striedter & Northcutt
Tweet media one
Tweet media two
0
1
29
@ShahabBakht
Shahab Bakhtiari
2 years
Iranian girls and their revolution on Vienna’s streets #MahsaAmini #مهسا_امینی
Tweet media one
Tweet media two
Tweet media three
0
5
26
@ShahabBakht
Shahab Bakhtiari
6 months
Me: “what a cool model … we should give it a try in the lab” Model:
Tweet media one
3
0
28
@ShahabBakht
Shahab Bakhtiari
3 months
Experiencing major FOMO this year. Will be watching the stream. Enjoy it down there, y’all.
@CogCompNeuro
CogCompNeuro
3 months
For folks who aren't able to attend in-person, we are excited to be able to stream all talks this year on our youtube channel: Papers can be found here: Please share!
0
140
397
0
1
28
@ShahabBakht
Shahab Bakhtiari
4 months
Anyone wondering how ANNs and deep learning have advanced our understanding of the brain should read this paper. I was fortunate to see these results at various stages of their development, and they only got better and better.
@AdrienDoerig
Adrien Doerig
4 months
1/13 Heavily updated preprint! We show that the contextual information encoded in Large Language Models (LLMs) is beneficial for modelling the complex visual information extracted by the brain from natural scenes. 🧵
2
50
164
1
3
28
@ShahabBakht
Shahab Bakhtiari
2 years
Join us in this exciting talk series. I’ll delve into the recent advances in self-supervised learning in AI and their applications in modeling the visual system.
@ResearcherApp
Researcher
2 years
Join us in our upcoming event series, #AI in #Neuroscience , where our experts will explore and share the latest insights. @achterbrain @crntozlu @ShahabBakht Register now! 👇
0
1
6
0
3
27
@ShahabBakht
Shahab Bakhtiari
3 years
GACs have become my favourite way of getting an updated view of a comp neuro/cog field. Whoever came up with the idea deserves the Nobel peace prize. This one from #CCN2021 was great:
0
2
27
@ShahabBakht
Shahab Bakhtiari
3 years
As much as I was shocked by the ignorance of the original consciousness tweet, I'm way more shocked by how it's turned my timeline upside down. This level of influence and reach can definitely be put to a better use.
3
0
27
@ShahabBakht
Shahab Bakhtiari
10 months
I believe it’s because neuroscience lacks an easy-to-use metric to measure transformative-ness. There is no list of major open questions that everyone agrees on. We’re divided into small subfields (for better or worse), busy with our own domain-specific questions. (1/2)
@erikphoel
Erik Hoel
10 months
The lack of replies to this reinforces my own belief: neuroscience has come up with many cool results in the past few decades, but none that are truly transformative
5
4
33
2
3
26
@ShahabBakht
Shahab Bakhtiari
3 years
There are so many interesting observations in this paper: The most interesting one imo: 1- "[CLIP] scores close to humans across all of our metrics presented most strikingly in terms of error consistency"
4
5
27
@ShahabBakht
Shahab Bakhtiari
4 years
Generalization in data-driven models of primary visual cortex "We demonstrate that a representation pre-trained on thousands of neurons from various animals and scans generalizes to neurons from a previously unseen animal" #ICLR2021_submission
Tweet media one
0
3
26
@ShahabBakht
Shahab Bakhtiari
4 years
A great demonstration of all the driving skills I had to unlearn before getting my Canadian driving license.
@ali_noorani_teh
Ali Noorani
4 years
Let’s say driving in Tehran is a bit difficult
95
647
2K
1
1
26
@ShahabBakht
Shahab Bakhtiari
4 years
Half a century ago, neurons firing in real-time
@elonmusk
Elon Musk
4 years
@Neuro_Skeptic Will show neurons firing in real-time on August 28th. The matrix in the matrix.
179
367
4K
2
9
24
@ShahabBakht
Shahab Bakhtiari
1 year
Beautiful work presenting a generative model of the brain connectome! Easy to imagine all the interesting things that can be done with this approach to bring evolution and development into NeuroAI.
@DanAkarca
Danyal
1 year
So happy to share! 🎉   Recently, we’ve seen 🔥 work using generative models to elucidate candidate principles of neural connectivity.   Our v2: Inspired by redundancy reduction, our new model can generate both the topology & weights of the connectome:
2
34
114
1
5
26
@ShahabBakht
Shahab Bakhtiari
2 years
I’m amazed by how easily you can break LLMs by including some simple relational queries or info in the prompt. No matter how good they are at generating high-quality text or images, they easily fail when dealing with relations. And that, imho, says a lot about their shortcomings
2
1
26
@ShahabBakht
Shahab Bakhtiari
4 years
Imo, one of the most influential works in computational (vision) neuroscience. It was my first encounter with an optimization-based approach to visual coding which was a tempting alternative to the more common tuning-curve-ology approach
@gabrielpeyre
Gabriel Peyré
4 years
Oldies but goldies: B. Olshausen, D. Field, Emergence of simple-cell receptive field properties by learning a sparse code for natural images, 1996. Showed that dictionary learning on natural image produces wavelet-like atoms.
Tweet media one
1
42
236
1
1
25
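The sparse-coding objective from Olshausen & Field is worth seeing in code. Below is my own toy solver (ISTA with a fixed random dictionary, not their simulation): infer a sparse code `a` for a signal `x` by minimizing ||x - D a||^2 / 2 + lam * ||a||_1.

```python
import numpy as np

rng = np.random.default_rng(0)

D = rng.normal(size=(16, 64))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
a_true = np.zeros(64)
a_true[[3, 17, 42]] = [1.5, -2.0, 1.0]  # signal built from 3 atoms
x = D @ a_true

lam = 0.05
step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant of the gradient
a = np.zeros(64)
for _ in range(300):
    grad = D.T @ (D @ a - x)                                # reconstruction gradient
    a = a - step * grad
    a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0)  # soft threshold (L1 prox)

objective = 0.5 * np.sum((x - D @ a) ** 2) + lam * np.sum(np.abs(a))
active = int(np.sum(np.abs(a) > 1e-3))
print(active, round(objective, 4))  # few active atoms, low objective
```

The 1996 result is the converse direction: learn D itself on natural image patches under this objective and wavelet-like, V1-simple-cell-like receptive fields emerge.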
@ShahabBakht
Shahab Bakhtiari
4 months
I haven't seen any satire account being taken so seriously so often. Not sure what that says about their jokes or the subjects of their jokes.
5
0
25
@ShahabBakht
Shahab Bakhtiari
2 years
How AI transformers mimic parts of the brain
0
6
25
@ShahabBakht
Shahab Bakhtiari
3 years
This dataset is 🤯 From the preprint () We all need to actively think what can be done with this dataset.
Tweet media one
Tweet media two
@AllenInstitute
Allen Institute
3 years
Announcing the most detailed examination of mammalian brain circuitry to date. This data release marries a 3D wiring diagram with the function of tens of thousands of neurons. @bcmhouston @PrincetonNeuro @IARPAnews @awscloud @googlecloud 🧵 👇
Tweet media one
3
252
677
1
5
25
@ShahabBakht
Shahab Bakhtiari
3 years
A very useful study, with many practical insights about self-supervised learning (SimCLR here) Trying to answer important questions about e.g. the effect of dataset size, domain etc on SSL
3
5
25
@ShahabBakht
Shahab Bakhtiari
2 years
Scaling is important, but I’ll refer everyone who believes scaling is _all_ you need to Chen’s beautiful work, as an example of how far we can go with genuine ideas, without scaling.
@ChenSun92
Chen Sun 🧠🤖🇨🇦
2 years
(1/n) Hello Friends! I wanted to share my paper with a #tweeprint . We made an RL module we call contrastive introspection (ConSpec). It enables learning when rewards are sparse and multiple key steps are required for success, as often occurs in real life
10
45
179
1
5
25
@ShahabBakht
Shahab Bakhtiari
2 years
@NicoleCRust The optimist inside me would argue that building brains will lead to understanding brains (à la Feynman) and eventually fixing them, so the current fast rate of progress in building brains is actually part of the progress toward fixing them.
0
0
23