Crows have about 1.5B neurons. Assuming each neuron connects to ~7k other neurons on average, that's roughly equivalent to 10T parameters in an artificial neural network.
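The back-of-envelope arithmetic can be checked in a couple of lines (treating each synapse as one learned parameter, which is itself a big assumption):

```python
neurons = 1.5e9              # ~1.5 billion neurons in a crow brain
synapses_per_neuron = 7e3    # assumed average connectivity
total_synapses = neurons * synapses_per_neuron

# one parameter per synapse gives 1.05e13, i.e. roughly 10 trillion
print(f"{total_synapses:.2e}")  # → 1.05e+13
```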
Is this a useful analogy?
Can we introduce the concept of "fake rigor"?
e.g. demanding gratuitous analyses/experiments, or arbitrary levels of modelling detail in situations that will make no material difference?
And while doing these, failing to recognise fundamental flaws in the overall question?
As a PhD student I was always nervous about smart peer reviewers finding problems in my research that I missed. As an advisor I now realise the stupid peer reviewers are infinitely more terrifying.
"Unfortunately this PhD funding is only open to UK students"
I have written this to around 20-30 brilliant young people around the world in the last two months. I'm sure they will do great things. It is just a shame they can't do them in this country.
To everyone doing a PhD: it's hard.
It's hard in ways that aren't obvious until you are deep into a question. You may not even know the question until you answer it. You'll understand the question better than anyone but you won't realise that either. This makes it a lonely path.
We locked ~100 neuroscientists in a building somewhere near the Potomac river and asked them what they thought about Representational Drift.
Here’s how they leaned on the Philosophical, Mechanistic and Practical levels:
As a grad student I was told ANNs were an unprincipled waste of time, and that serious ML research (e.g. that deployed in industry) used SVMs, Gaussian Processes etc.
The bitter lesson seems to be that we aren't smart enough to know what works, so we should research *broadly*.
It's mind-boggling that many AI researchers are still not quite aware of this Bitter Lesson. Unbelievably hard to overcome this mental attitude of sticking to small models/data and focusing on overly complex models/methods. Especially in academia...
Young & aspiring researchers & students everywhere: though it may feel intimidating, you never, never need to apologise for emailing another researcher out of the blue to discuss science or propose a research project.
Be reassured: these emails are gems amongst all the spam.
I'm excited to be taking on a new role as Reviewing Editor for J Neurosci, where I want to help build up the journal's representation of computational and theoretical neuroscience. Send us your best!
Turns out that publishing tons and tons of papers might actually be slowing down scientific progress... who knew?
Slowed canonical progress in large fields of science
Neuroscientist: we ignore the role of glia in neural circuit function
Theoretical neuroscientist: ok I've relabelled a bunch of units as "glia" - are you happy now?
What proportion of baseline synaptic fluctuations is attributable to noise vs systematic plasticity processes?
Surprisingly, to maintain memories the optimal proportion typically lets noise dominate!
For the 1000th time: HOMEOSTASIS DOES NOT IMPLY OR REQUIRE ANY SPECIFIC PHYSIOLOGICAL VARIABLE TO BE HELD CONSTANT.
Unless of course you need a handy straw man to make a paper or grant appealing.
Self-healing codes: How stable neural populations can track continually reconfiguring neural representations
We asked whether a continually evolving neural representation can be read out by circuits with more rigid tuning....
I'm bewildered. The argument here seems to be that if a statistical model of a phenomenon provides accurate predictions, then this is evidence that the phenomenon itself is nothing more than that same statistical model + the means used to fit it.
I seriously can’t remember reading a paper that excited me as much as this one did. It is such a distillation of our current moment and so beautifully done.
Synapses carry a lot of the burden of learning and memory, yet they are tiny bags of jelly subject to wild fluctuations in the molecules they are built from.
How can reliable decisions to potentiate/depress emerge in such a small, noisy system?
#tweeprint
Adding apparently redundant neurons and connections to a network can boost learning performance. This is because redundancy 'flattens out' the error landscape, making it easier to descend.
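The intuition can be illustrated with a toy sketch (my own, not the model in the paper): run gradient descent on a loss where n "redundant" weights all contribute the same quantity. With more redundant units, the same step size makes faster progress toward the target:

```python
def steps_to_converge(n_units, lr=0.01, target=1.0, tol=1e-3, max_steps=10000):
    """Gradient descent on L(w) = (sum(w) - target)^2 with n redundant weights."""
    w = [0.0] * n_units
    for t in range(max_steps):
        err = sum(w) - target
        if abs(err) < tol:
            return t
        # every redundant weight receives the identical gradient 2*err
        w = [wi - lr * 2 * err for wi in w]
    return max_steps

# more redundant units -> fewer steps to reach the target
print(steps_to_converge(1), steps_to_converge(10))
```

This is only a caricature: the point is that redundancy reshapes the descent geometry so identical local updates yield faster global progress.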
Dear Peer Reviewers
When did 'surprising' become a requisite for publishing a new and important finding?
Was the discovery of the Higgs Boson surprising? Or the crystal structure of the potassium channel? Or the latest data dump from a neuropixels probe?
Stick to your job.
to restore a bit of cosmic justice I'd like to share this beautiful paper from Shahaf & Marom > 20 years ago that wasn't even cited by a certain recent paper in Neuron that has news outlets in a tizz:
Why is daily life in New Zealand *not* headline news in UK?
If more people in the UK could see the possibilities we'd be able to achieve this in a couple of months.
Enforced lockdown for 6 weeks, vaccinations/tests targeting med/contact workers, full quarantine at border.
@AnnaKorhonen3
In New Zealand we have been virus free 8 months out of 10. We live a normal life with big events, no restrictions on groups etc. We wear face masks on planes and public transport in Auckland only as a preventative measure. It is surreal to live here and I consider myself lucky.❤️
Are neural representations of learned tasks and environments stable? Should we expect them to be? We review the evidence and implications of unstable, continually drifting representations.
Efficient learning is not only about synaptic plasticity mechanisms: circuit architecture matters.
The connectome of a neural circuit can boost learning performance, compensating for imperfections in plasticity mechanisms and/or noisy learning signals.
A lot of our research suffers from 60% ennui: you get about 60% to nailing a result then start doubting and telling yourself it is obvious, boring or otherwise worthless. It's a killer.
Aside from ruthless self discipline or self belief, how do you cope with this?
We have this idea that we shouldn't "salami slice" our findings into multiple papers.
I claim that this is largely misguided, and paradoxically stems from us valuing QUANTITY over quality, not the reverse.
I also think this *might* be changing for the better. Hear me out: (1/n)
With some regret I have resigned as reviewing editor at eLife.
This wasn't done in protest, I just felt I couldn't continue with things as they are at the moment. In general I feel even more pessimistic and disempowered about scientific publishing. Frustrated and sad.
Maybe I'm naive but I don't understand the panic around this. If a tool writes some fluent, accurate text and an author reads, edits and oks it, who cares? Papers are about communication, they're not about inventing sentences that never existed before.
Optimal plasticity for memory maintenance during ongoing synaptic change
A beautiful analysis led by my postdoc, Dhruva Raman (who will be on the job market soon - DM me!)
@eLife
Representational Drift is unsurprising in the same way that synaptic plasticity is unsurprising: something must change when the brain learns. But this conclusion misses several important points:
I always force myself and my co-authors to try to write as clearly and accessibly as possible. I sometimes drive my students crazy.
There's a debate about whether this really helps or whether scientists prefer clunky, jargon-laden tosh. Unfortunately:
Have a PhD (or nearly)?
Want to come to Cambridge, UK to work on:
(1) control for neural circuits?
(2) spike-based computation?
(3) brain-machine interfaces?
(4) other :) ?
We have space, projects, ideas and can help with a proposal (deadline May 15):
I think perhaps the biggest point of confusion in neuroscience (all science?) is the use of "noise"
sometimes it is a placeholder for variables that we don't account for, while other times it refers to a genuinely random process
both senses used interchangeably = confusion!
a couple of years ago I had a fuzzy suspicion that realtime feedback was missing from biological theories of learning - luckily some brilliant people have clarified that fuzz with what I think might be some of the most important research in decades:
Can someone please explain to me why editors at top journals can tell me that "in silico" work on neural circuits is not of broad interest while the rest of society is melting down over fears that superintelligent neural networks will take over the world?
dear peer reviewers everywhere,
I really do appreciate your time and effort, but if "this idea has been put forward before" is grounds for rejecting a paper then we might as well pack up the science and go to the beach because we'd only be allowed one paper for each topic
Many of us are waiting long times to have manuscripts reviewed, particularly the last few months.
Here are a few insights/observations from being an editor, reviewer and author - and still 'young' enough to remember what it feels like to be bewildered by the peer review process
1/n
The joy of being truly interdisciplinary is that any time I get dismissed by neuroscientists as being "just an engineer" I can refer them to my engineering colleagues who will laugh and tell them I'm not an engineer. And vice versa.
Neuroscientists seem to frequently confuse:
"systems-level" vs "reductionist"
with
"large vs small"
systems-level thinking can be applied at the molecular, cellular, network, brain and social scales.
An insidious fallout of the Human Brain Project is that many (computational) neuroscientists now have this knee jerk heuristic that asking questions about physiology, ion channels etc is evidence of unprincipled "detailed" modelling, as though HBP embodies such questions.
Why have neuroscientists become salesmen? Time to get out of corporate hierarchy and self-promotion. Cries out for
#Slowscience
@ERC_Research
@UKRI_News
would do well to read the thoughtful and sobering review of
@In-Silico-Film
Astrid Prinz and Eve Marder pointed out this basic fact 20 years ago in one of the simplest network architectures imaginable (and faced pointed derision and dismissal, including from her PhD advisor). Uncited. Will we ever learn?
What if we benchmarked AI/neuroscience model comparisons using artificial NNs as ground truth instead of Brains?
Presenting: "System Identification of neural systems: If we got it right, would we know?" at ICML this week.
Work led by Yena Han, Tommy
The static over eLife's publishing model shows us that we don't need to worry about Big Journals enforcing gate-keeping and a broken peer review system: academics will willingly do this to themselves.
@earth_andrew
so many of the responses make me sad - will we ever get (back?) to the point where people can follow their passions and curiosity because they are simply passionate and curious? our economics is slowly killing human creativity.
The more time that passes from finishing my thesis the prouder I am of what I achieved. It remains my best work, not because of citations and other BS, but because I completed it by transforming myself. I learned to trust my judgement & learned that ultimately that's all we have.
Apparently a paper that overturns Marr-Albus theory and explains dense cerebellar activity is not enough of a conceptual advance for Neuron. I usually take editorial triage on the chin, but I have to wonder what *would* count as a big enough conceptual advance?
Claim: our research culture of 'productivity' (i.e. churning out papers) has the side effect of keeping straw men alive so that findings look like paradigm shifts. Net progress can be zero or even negative in the pursuit of healthy numbers. Can this be quantified? Tested? How?
I'm starting to wonder if I've been too dismissive of the Blue/Human Brain Project. I can begin to see important scientific use cases for this type of work and the infrastructure supporting it.
📢
#PreprintAlert
Our state-of-the-art models of rat somatosensory cortex and hippocampus CA1 are out! These atlas-based, biophysically detailed models exhibit in vitro and in vivo-like activity and replicate laboratory experiments.
How do neurons move cargo like receptors & channels around complex morphologies to maintain function?
Take a snapshot: you see cargo where you expect.
Measure transported cargo & you might be surprised - it tends to be in places where it isn't needed!
Why is neuroscience so prone to framing questions as having a hard yes/no answer?
And how many person-years of research would we rescue if we were to accept non-binary outcomes?
Is there a life lesson you wish you could teach your younger self - one that your younger self would almost certainly fail to accept or even comprehend?
Mine is: committing to something liberates you. Avoiding commitment imprisons you.
Neural activation in repeated, learned tasks drifts continually over days, which seems to be a problem for encoding stable behavior. We used data from
@lndriscoll
in mouse PPC and tried to resolve this... 1/n
Insightful thread. And it isn't just a rant about Big Tech or hype; it identifies the huge lingering problem in steering resources away from boom/bust cycles.
For the 1000th time: we need diverse research ecosystems with lots of differing, complementary approaches and people!!!
Every time El*n M*sk or Neur*link trends in neural tech, I get a sinking feeling in my stomach. It's difficult to explain all the reasons behind the sheer depth of the despair I feel, but I must try.
Neuroscience *is* pre-paradigmatic, and that's OK actually.
There's no better sign of a work in progress, making progress, than a litany of revised ideas.
1. Not much going on today, so here's a personal magnum opus on how neuroscience is pre-paradigmatic and its results are not even wrong. (link in image, also a 🧵)
Once again I'm completely lost and confused about my fellow neuro/cognitive scientists' views: why the collective dump on Chomsky et al's (very tight IMO) op ed? It was an op ed. In a newspaper. And it was pretty good considering the complexity of the topic. What am I missing?
@KordingLab
Because it is utterly incoherent. Example: I go to lengths to explain to students how "learning and memory" is distributed, multifaceted and impossible to reduce to a single mechanism. Then they go and read a paper claiming a molecule is "necessary and sufficient" for memory.
just had a long, emotional discussion with [unnamed but very senior and influential scientist recognised for their dedication to mentorship...] and we agreed that an unresolved barrier to true DEI is our inability to see past people's academic track record, especially students
Here is a problem with post-publication peer review that I can't find a fix for: the majority of papers won't get read, let alone "evaluated". In the current model there's a constraint that at least forces someone's eyes across a manuscript.
Is it a problem? Is there a fix?
I think the problem is at least twofold:
1. can't keep up with literature
2. can't simultaneously do quality research AND publish at a frenetic rate
So we can't filter the signal from the noise AND effort that could have been put into real science is spent generating noise!
On long, dark evenings when I'm struggling to make sense of the world - when all I can think about is the absurdity and cruelty of existence - the thing I surely need is a long form podcast featuring a 30 something tech bro who has figured it all out.
It's understandable to be cynical about peer review. But many of us take it seriously and try to help authors improve their work and reach their audience. On balance, my work has benefited from it, notwithstanding pain and frustration. I think that's true for most of us.
Undergraduates in my department think that research is funded by my university.
This is a perfectly logical assumption, and those that I discussed this with were shocked that we have to seek external funding *and* that their tuition fees don't even cover the cost of tuition.
And for several years I hated my thesis work. I was embarrassed by the mistakes I made, by how little I knew and how little the field cared about it.
If I had met my future self and read what I'm writing now I would have baulked.
So I guess the hard way is the only way.
This is a lesson that never gets old. However, I reckon the fact that Karikó's work proved useful will be used as an argument for "focusing on translation and societal impact", not as an argument for the inherent unpredictability and eventual utility of true research.
Dear fellow researchers: if you are a grant reviewer or on hiring/selection committee and you discount or question work on the basis of multiple authorship then you are damaging science, not protecting it.
Instead you should be questioning CVs with >>10 pubs per year.
Before deciding how long a PhD should take, how many papers you need, etc, we need to agree on the purpose of a PhD.
Much of the strife comes from a confusion about whether it is a qualification for a particular career, or a continuation of one's learning.
Anyway, one editor's quickly assembled triage letter won't take away my student's achievement, so here's the latest preprint. Editors feel free to DM me if you do want to review it :)
Understanding how model parameters determine behaviour is a tough problem with no general solution. This paper throws deep learning at it, with some neat results.
Training deep neural density estimators to identify mechanistic models of neural dynamics
This keeps popping up in discussions/peer review.
Do we think 'the brain' is chaotic? Specifically, do we think that chaotic dynamics describe a substantial part of dynamics that govern cortical computations?
Chaos might be *possible*, but is it largely relevant?
do you find parts of the neuroscience community simultaneously stiff and sloppy?
✅ultra pedantic about whether some arcane "control" experiment is even necessary
✅superstitious about statistics and happy to make massive leaps of logic in interpreting data
My mentor, Eve Marder, spent much of her career using models to show just how little we learn by manipulating single parameters, as many experiments aim to. Even in 'simple' models this is almost impossible.
Yet many see this as a limitation of modelling!!!
It blows my mind.
@drmichaellevin
@adeelrazi
@TrackingActions
@NeuroCellPress
Ok, hand on heart, if you read an article stating "shopping mall door is sentient" and it turned out that the door in question was just a standard IR triggered door would you feel this was misleading?
@ItaiYanai
I think that's the wrong interpretation. The paper finds evidence for a change in citation patterns: it has become less common to regard single studies as watersheds. That points to a change in scholarly practice, with credit spread more evenly across prior work.
The Marr-Albus theory of cerebellar function has stood the test of time. A key element is “codon theory”: sparse granule cell activation facilitates pattern separation. However, recent experimental work challenged this by finding *dense* activation during motor learning… 🧵
Sometimes it seems like anything can happen in the brain. Amidst all the chaos, it's reassuring to remember that at least primary sensory neurons maintain stable responses to tangible things in the external world...
...except they don't:
I think
#neuralink
is cool. I rolled my eyes at the hype (there has to be some hype) and I shrugged at the hating from the neuro community. I was a little embarrassed by the ensuing finger wagging and chest puffing. And now I'm embarrassed as an engineer.
👇THIS is why the concept of sloppiness is important in biology. When we move from one scale to the next, sensitivities spread across many components. So papers with titles like "The role of molecule X in high level process Y" might not be that informative unfortunately :(
My brilliant postdoc, Dhruva Raman, has created a new toolbox in
#JuliaLang
for efficiently computing minimally disruptive curves in complex models: MinimallyDisruptiveCurves.jl:
What's a minimally disruptive curve? How can it give insight into a model?
I offered a masters project on BMI recently. After I outlined it to a few eager students, they asked about the decoding algorithm and were visibly bewildered and/or horrified that it didn't involve "AI" or neural networks.