Somebody let me teach a full college class this summer at UCSD lol:
20 lectures on Neural Signal Processing. All course material here:
The labs should be especially useful - they are designed to build incrementally towards DSP concepts in neuroscience.
one of my postdoc fellowship apps got rejected and I honestly don't know whether to laugh or cry at this tragicomedy
like, thank you??? 🥲
I do hope to meet this reviewer one day and convince them that I'm actually just extremely talented at sneaking into great labs
Interrupting my stream of basketball tweets to bring you our latest paper, out in
@NatureHumBehav
today...
...in which some of my closest friends in grad school and I managed to publish a paper inspired by our collective identity crisis.
Thread for all you CogSci nerds here:
Literally all of my notes from Cosyne19 - a lot of manifolds (neural and behavioral), but also a lot of embodiment, coding, and philosophy.
Warning: it's really long, so if you want to go through the whole conference as me (but without all the natas) ...
I bumped into a student from my summer class and he said he was applying to grad schools. I asked him what field he's thinking about and he said:
"computational neuroscience because I want to do what you do"
and I'm just letting the internet know that I didn't cry
My
#AI4Neuro
magnum opus:
Discovery of spiking network model parameters constrained by neural recordings, using simulation-based inference & generative “AI”.
(aka the answer to “how the f did you end up in Tübingen?”)
Here's what we have in store:
I'm probably lighting my career on fire with one tweet but this has made the rounds and it (justifiably) does not look pretty. But since I'm a collaborator on this, I feel like I should at least try to inform, because the article does not do the (good) science justice. 1/
So they CRISPRd stem cells with a Neanderthal genetic variant. Then grew "Neanderoids", small brain organoids, with the purpose of hooking them up to crab robots, in order to have robowars between modern human organoidbots and Neanderoidbots. Okay then.
just finishing my reviews for neurips 2024:
I'm happy to report that we have solved the brain, once again, and at least 3 different times.
(please dear god take me out of the neuroscience / cognitive science reviewer pool)
I'm presenting at NeurIPS next week, lots of things to say about this paper, but the 2 things I'm most proud of are that multiple reviewers said, despite the very complicated title:
1. it's well-written, and
2. "the idea is simple and straightforward, but not in a bad way" (!?)
Executable Research Article for the ECoG timescales paper is finally live (
@bradleyvoytek
@thmspfffr
@rudyvdbrink
):
same paper, but all figs can now be generated on-site (+ more)!
...you bet I'm milking all the tweetprints I can out of this one paper
before Lisbon, I gave a
@mackelab
teatime talk about surfing as a beginner to prepare folks for the frustration
but it was actually about the PhD: in SD, I realized surfing as a beginner is a lot like starting a PhD, when you, invariably, also suck
this was my guide for both:
you know you're a senior PhD student when a simple blog about
@neuromatch
turns into a medium-sized rant about the institution, AND a historical review of cognitive science and computational neuroscience.
take-home message: NMA2020 was f-ing rad 🎉🔥🧠
it's happening folks
I'm defending my PhD on Oct 26, 2020, 11AM PST.
there will be ECoG, PSDs, 1/f, E-I balance, timescales, gradients, organoids (no consciousness though), and moar brain data
it's been real, neurotwitter. Zoom link in abstract & here:
I have a "thing" for well-written reviews, and this one is immediately at the top of the list:
informative, beautifully and clearly written, and very well structured. should be an exemplar text even if you're not interested in the particular topic
having been thwarted by covid once already, I was waiting until my ass was in an office chair to tweet this:
I am in Tübingen!
moving during peak pandemic is life on legendary-mode, but I'm extremely grateful for all the people that reached out to feed and/or welcome me!
also, and I quote:
"...oscillations as a functional readout are a very simple, ubiquitous and theoretically entirely understood phenomenon of coupled neural systems..."
WELL SHIT why didn't nobody tell me???
holy shitballs - I mean brainballs - our oscillating organoid paper finally, finally got published last week.
I wrote about the entire (5-year) process and explained the science here:
It's an insider's look into this whole thing. I think it's hilarious.
reading Eve Marder reviews makes me feel like my PhD diploma is a participation award I got at a neuroscience-themed kindergarten
"congratulations!! you know slightly more than nothing!"
I had the distinct pleasure of writing something for
@SCglobalbrain
about dynamical systems in the brain,
#neuralmanifold
, and some great Cosyne19 presentations.
Adding to the pantheon of "your brain is an orchestra" analogies. Really digging the dope artwork!
made this cartoon about reading papers, very proud of the chuckles i got today
(i feel like i may have seen this and subconsciously ripped it off from somewhere...)
A quick tutorial/blog post (with code) on the Hilbert Transform and instantaneous amplitude/phase measures in (M)EEG/ECoG/LFP, where I find out scipy.signal.hilbert() is a lie.
Perfect light reading for your plane ride to
#SfN18
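For the curious, a minimal sketch of the instantaneous amplitude/phase computation the post covers (the toy signal and sampling rate here are made up; the "lie" is that `scipy.signal.hilbert` actually returns the analytic signal, whose imaginary part is the Hilbert transform proper):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000  # made-up sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
# toy 10 Hz oscillation with a slow (1 Hz) amplitude envelope
x = (1 + 0.5 * np.sin(2 * np.pi * 1 * t)) * np.sin(2 * np.pi * 10 * t)

analytic = hilbert(x)       # analytic signal (NOT the Hilbert transform itself)
amp = np.abs(analytic)      # instantaneous amplitude (envelope)
phase = np.angle(analytic)  # instantaneous phase, in [-pi, pi]
# instantaneous frequency from the unwrapped phase derivative (~10 Hz here)
inst_freq = np.diff(np.unwrap(phase)) * fs / (2 * np.pi)
```

Edge effects aside, `amp` recovers the slow envelope and `inst_freq` hovers around the 10 Hz carrier — which is exactly why this works best on narrowband (filtered) signals.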
i spent 2 full days last week tearing my hair out "debugging" code that was failing on the cluster
turns out, i wasn't allocating enough memory and the entire solution was to add another 0 to the job script
remember, kids: it's not imposter syndrome if you're actually an idiot
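For posterity, roughly what that fix looks like in a job script (assuming a SLURM cluster — the script name, job name, and numbers here are made up):

```shell
#!/bin/bash
#SBATCH --job-name=not_a_code_bug
# the job kept dying with cryptic errors: it was the OOM killer, not the code
# before: #SBATCH --mem=4000    (~4 GB -- not enough)
#SBATCH --mem=40000             # "add another 0" -> ~40 GB, problem solved
python run_analysis.py
```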
a bit late but finally finished the COSYNE2022 blogpost. In it:
1. some works I found particularly interesting at the main conference
2. musings on "how to conference"
3. a report about our (best!) workshop on neural timescales with
@roxana_zeraati
I'm a bit of a connoisseur when it comes to faking neural time series, and Julius' thing is really, really good.
why waste GPU on fake images when you can use DDPMs to make fake brain data?
also, we're at Bernstein this week (...if you still need to fake data for your poster)
Denoising Diffusion Probabilistic Models (DDPMs) have been a game changer for image & audio generation. But can they also be used for neural signals?
With Julius Vetter,
@jakhmack
&
@_rdgao
, we adapt DDPMs to generate multivariate and densely sampled EEG, LFP & ECoG signals! 1/6
rant of the day: we NEED a better f-ing way to catalogue oscillations other than just frequency
there's a billion features, collapsing it down to a single number that also ROUGHLY falls within a 30Hz range helps nobody, esp when they correlate to every f-ing thing in behavior
we know there is a diversity of cell types, network topographies, and other physiological mechanisms that shape network dynamics.
can we include them all while automating fitting spiking networks to real brain data?
Poster III-38 tonight at
#cosyne23
finally got around to finishing this one:
an (n=1) insider look at how some schmuck (me) bumbled through teaching a whole course on analyzing brain signals at a frigging R1 institution.
I can't not lol when I think about this absurd situation.
4/52
"sell me this pen"
except it's even more useless than a pen. It's your research.
plus some thoughts on the narrative and the creative component of problem-creation in science.
2/52:
We know that the diversity of cell types, network topographies, and other factors shape network dynamics. Can we include more as free parameters in spiking networks & fit them to real neural recordings? See
@_rdgao
for inverting clustered AdEx networks with sbi
Session II, C-07
18 months ago, there was the Nth re-run of "are oscillations real?" Twitter debate, in which
@dlevenstein
referenced a dinky rant I wrote 2 years before THAT
now, we have a real review/perspective paper—feel free to cite in future re-runs of this debate aka literally right now
Preprint alert 📢
with
@dlevenstein
,
@prokraustinator
, Bradley Voytek,
@_rdgao
In this work, we evaluate the theoretical status of neural oscillations and local field potentials generally. A thread... 🧵
I have 30 White Castle sliders.
If you or your friend is hungry at a poster, or just plain hungry, uh...reply here for a slider I guess.
How long would these last?
#SfN19
@overheardsfn
Terry Sejnowski, wearing the Terry Vest under a suit jacket, just talked to a whole
#COSYNE19
crowd about oscillations and traveling waves, also said that oscillations are nonsinusoidal and bursting.
I feel so validated after a week of single units and latent manifolds.
what I like the most about this is that all the raw data are freely available, and all the code and intermediate data are live here (albeit ugly):
so anyone can go and prove us wrong, right now, and you can save everyone some time through the peer review
Long story short: there WILL NOT BE Neanderthal vs. Homo sapiens organoid robot battles! That doesn't seem to be a direct quote, I don't know why tf Alysson would say that, if he did, and I don't know why that sentence would be written that way in the article if he didn't. smh. 2/
There will NOT be pitting robots against each other - neurotypical, autistic, or Neanderthal. Not in the next 10 years and certainly not on my watch. The point of the robot is to see how interacting with the environment shapes early neurodevelopment. NOT BATTLEBOTS. 9/
the bittersweet feeling of finishing, and soon leaving the lab gang and my home for the last 6 years, was overcome only by the overwhelming support of
@bradleyvoytek
, mentors, friends, family, and all the kind messages.
alas, got to leave the nest - across the pond to Tübingen!
thesis is officially approved, and defense talk is online:
find both here:
final week in San Diego, after 6 years...holy shit. too many feelings for a tweet, the least of which is how cringe it is to watch myself give a talk
i am but a humble neuroscientist walking amongst the gpu-rich, but SFN did give me the superpower of going to bed at 3 and showing up at 8:30 4 days in a row
The goal is not to build a robot because "fuck it why not". The goal (at least mine) is to get at one of the fundamental questions (brain in a vat) in philosophy & cogsci. I don't know if they are conscious, but I'm fairly sure they are much less conscious than lab mice. 7/
Jordan Peterson reenacting the moment of existential dread every comp neuro PhD student faces when they first get told "all models are wrong", 8.5/10 performance
day 9 of
@neuromatch
: control systems theory after HMM and Kalman filter, boy it really makes learning easier when somebody put a lot of thought and effort into designing course curriculum.
this makes me almost look forward to reinforcement learning tomorrow. almost.
"To dismiss observable signals as an epiphenomenon assumes a level of knowledge that no one has ... The attitude boils down to “they don’t play a role because we kind of already know how the brain works”. No, you don’t. No one does."
from
@MillerLabMIT
a few people have told me recently that I should not let humility get in the way of celebrating milestones and achievements, and they were right.
so here's a very public reminder to myself that I'm a baaaaaad man.
(ok more serious thread below, on failures in academia)
While we're at it, and as
@_rdgao
is way too modest to mention it himself: He was recently also awarded a prestigious Marie Skłodowska-Curie Fellowship! We are extremely happy to have Richard in our group
@mackelab
@ml4science
@uni_tue
, and immensely enjoy doing science with him!
How can we train biophysical neuron models on data or tasks? We built Jaxley, a differentiable, GPU-based biophysics simulator, which makes this possible even when models have thousands of parameters! Led by
@deismic_
, collab with
@CellTypist
@ppjgoncalves
Finally, we (scientists) tend to blame the media a lot for misrepresentation - I think this is certainly valid here with how some of that is written. But this is a good example of the scientist stretching the science for shock value and I'm certainly not happy about it. 10/
Personally, I think the diff schools of thought on cognition - especially embodiment and dynamics, in addition to info-processing - are valuable in advancing not only "cognitive science", but neuroscience and AI.
Has that been your experience, as a cognitive scientist, as well?
day 2 of
@neuromatch
. starlog > *narrator* : it did not get easier.
8 hours on zoom: philosophy of computational modeling while wrangling visual+vestibular input integration, then brainstorming potential projects.
we didn't get through it all. but we got through a fuckton 🎉🧠
raw, unfiltered, director's cut on how the 3rd paper in my PhD materialized
my mom said these were way too f-ing long and my German mom (
@nschawor
) said I need to blog more so here's part 1/3 aka why am I so pedantic
day 1 of
@neuromatch
TAing is done?!
holy smokes that was a looong zoom day but the next 3 weeks are gonna be 🔥🔥🔥 cuz my pod already started debating about causality, evolutionary forces, and deep nets for modeling the brain, among a few other small topics
Sign up as student or TA - deadline May 7. Learn computational neuroscience or Deep Learning. 3 weeks (July 5-23 or Aug 2-20). Beautiful materials. Small group setting with great (and paid) TAs. Every timezone. Many languages. Great community.
day 4: I'm learning so much I think I should be paying
@neuromatch
to TA this course.
also: OLS, GLM, MLE, cross val, bootstrap, regularization, and a project proposal on a concrete question... no offense to data scientists but I think there are 1800 more of you for hire now
You're not gonna believe me, but the most incredible thing is not that I got through
#SfN2019
alive, but that I didn't have a single sip of coffee the entire time I was in Chicago.
Some reflections:
There is a lot of data in the short paper so check it out, but in summary:
we find that what's "CogSci" is predominantly Psychology, both by who publishes in CogSci and which journals cite/are cited by CogSci.
@yael_niv
@ashleyruba
so if you're not justifying the low pay, what exactly is the intent of your reply? That good training/mentorship and good pay are not mutually exclusive? Indeed, that exists in industry. Are you saying good mentorship in academia warrants taking a 50% paycut from market rate?
why is it that the RNN folks in comp neuro make especially accessible tutorials that guide entry into research-level material? are there similar examples in any other subfields of comp neuro / ML?
real testimonial for day 5 of
@neuromatch
on PCA: "today was fun!"
oh the pure joy of seeing MNIST latent space 🥲🎉📊
also they got this beautiful thing happening. this is an accurate representation of my brain after one week
me when I was 18: "like...I'm already pretty tolerant about learning useless math but when the f is Taylor series going to help me in life, EVER?"
me on day 12 of
@neuromatch
: "ball is life? more like linearizing dynamical systems is life."
day 8 of
@neuromatch
: HMM & Kalman filter. im ded.
but I'm really loving these conceptual visualization slides (e.g., for HMM & GLM) where you can swap component functions and distributions in and out to make different models.
compneuro-legoland!
timescales are everywhere and everywhen(???) so we are stoked to have such a wide range of expertise on this topic for the workshop, across methods, organisms, and...scales
personally i'm looking forward to the scale vs. scalefree perspectives (hopefully debate)
also, beach
Really looking forward to our workshop organized together with
@_rdgao
on "Mechanisms, functions, and methods for diversity of neuronal and network timescales"
@CosyneMeeting
, here is a tentative schedule:
day final of
@neuromatch
: 80+ hours of zoom later, I can't believe it's over. I'm gonna miss these ponies next week.
it has been so incredibly inspiring and intense for the last 3 weeks that things felt...normal.
raincheck on the end of summer school beers but it'll be on me.
We ARE working on neurotypical organoids and we have found oscillatory activity in the network. This is true, and we are super excited about this (or, I think I am, not sure if
@bradleyvoytek
still wants anything to do with this after this publicity stunt). Preprint pending! 5/
But I decided to write this anyway because I think we have a responsibility to fix shitty "news" articles (no surprise it's in
@sciencemagazine
), especially when pertaining to our own stuff, ESPECIALLY if it's actually genuinely cool stuff misrepresented (by media and PI) !! end/
I'm not working with the "Neanderoids" (..wtf guys). But as written in the article, it's just 1 gene swapped out with CRISPR. Again, I don't know why he would say "creating Neanderthal mind", but the lab is composed of biologists, not cognitive scientists. So please excuse him. 3/
We're NOT (yet) wiring the organoids to crab robots. The robot is a distant goal following a series of planned experiments with open-loop, closed-loop, and finally, embodied feedback stimulation to the organoid, since we now see real oscillatory network activity in them. 6/
I'm gonna be burned at the stake for this but I've been having this nagging feeling for a long time now that action potentials in neurons are not "designed for communication", whatever that means.
one of the most valuable and unintuitive things I learned in science is that good people—formal mentors, colleagues, or people in the community (like Patrick)—amplify one's opportunities and knowledge exponentially
I often meet with grad students and postdocs interested in industry. I tell them how important it is to contact people. Most jobs aren't posted, and most people are nice. However, students often push back, thinking networking is transactional and a bit icky 1/
I'm stuck in some kind of weird nerd Groundhog Day scenario where I Google a new thing and find out it's all computations and always has been.
(yes the meme is in there, obviously)
day 14 of
@neuromatch
, comp neuro in one sentence:
"dot products and nonlinearity with some stochasticity, maybe evolving over time, fit to a loss function with optimization..."
"wait isn't that just machine lear—"
"... oh and that's how the brain does it."
day 11 of
@neuromatch
: LIF and *whisper* E/I balanced spiking neural networks, synaptic plasticity models...
whew, I feel like I know stuff again, finally
IDK why they made a point to say "robots resemble crabs". It's just a hobbyist robot that one of the students (who is, get this, a high school freshman) bought online that has more sophisticated servo motors. It's actually turning out to be a pain in the ass with so many legs. 8/
I know this is serious but I can't stop lol-ing at UCSD trending because of the coronavirus shutdown...
...and the imminent mass realization that physical universities in the 21st century are a literal scam that can be replaced by MOOCs in two weeks.
this paper's nuts. for sentence classification on out-of-domain datasets, all neural (Transformer or not) approaches lose to good old kNN on representations generated by.... gzip
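If you're curious, the trick (as I understand it) is k-nearest-neighbors on gzip-based normalized compression distance — a toy sketch with made-up sentences and labels, not the paper's actual code:

```python
import gzip
from collections import Counter

def ncd(a: str, b: str) -> float:
    """Normalized compression distance: similar strings compress well together."""
    ca = len(gzip.compress(a.encode()))
    cb = len(gzip.compress(b.encode()))
    cab = len(gzip.compress((a + " " + b).encode()))
    return (cab - min(ca, cb)) / max(ca, cb)

def knn_classify(query: str, train: list, k: int = 3) -> str:
    """Label a query by majority vote of its k nearest neighbors under NCD."""
    nearest = sorted(train, key=lambda item: ncd(query, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# made-up toy data
train = [
    ("the team won the championship game", "sports"),
    ("the striker scored two goals", "sports"),
    ("the player was traded before the season", "sports"),
    ("the stock market rallied today", "finance"),
    ("the central bank raised interest rates", "finance"),
    ("quarterly earnings beat expectations", "finance"),
]
print(knn_classify("goals scored in the final game", train))
```

No training, no GPU — just a compressor exploiting the fact that overlapping text compresses better when concatenated.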
as I relearn a bunch of math things in grad school that are finally applicable to (my) real life, one thought repeatedly pops into my head:
how the f did they ever let me graduate from college?
no fake blue check but board YY2 tomorrow in the dreaded Wed afternoon session:
fitting AdExIF spiking neural networks to neural population data to infer underlying circuit parameters - reduce graduate student pain of manual parameter search! with
@deismic_
and
@jakhmack