assistant professor of computational cognitive science at
@DondersInst
and
@AI_Radboud
· she/they · Cypriot (cypriot/kıbrıslı/κυπραία) · σὺν Ἀθηνᾷ καὶ χεῖρα κίνει ("with Athena's help, move your hand too")
Four themes of my research/papers that come up often; a thread of threads in no particular order:
➮ AI-driven dehumanisation 🤖🙅🏻♀️
➮ reclaiming AI as a (cognitive) science 🔎💭
➮ metatheorising in (the special) science(s) 🔬🧠
➮ teaching programming inclusively 👩🏻💻👩🏾💻
I'm an asst prof. I don't work after 6pm and I don't work weekends. Maybe I'll be punished. But I'm putting it out there because I ABSOLUTELY love my job. But I love my mental health too. And those boasting they love to work, defining it as overwork, set a biased example. 😌
Hot take: NO ONE actually wants AI to:
1) Book their vacations
2) Order a random pizza
3) Have access to their bank account
Why are we creating tech no one wants?
I'm incredibly pleased and overwhelmed to share some huge personal news: I have accepted a position as Assistant Professor in Computational Cognitive Science at the Donders Centre for Cognition and the School of Artificial Intelligence [
@DondersInst
@AI_Radboud
@CCS_donders
]! ☺️
No-one has ever been able to replicate Gregor Mendel's observations of pea plants.
They're a little "too perfect", lacking even random statistical noise that would have been expected from small sample sizes.
Was it scientific fraud?
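For the statistical intuition behind "too perfect": a minimal stdlib sketch (illustrative numbers of my choosing, not Mendel's real counts) of how much scatter a true 3:1 dominant ratio should produce in modest samples.

```python
import random

random.seed(1)

# Toy sketch: under a true 3:1 dominant ratio, samples of 100 plants
# should still show visible binomial scatter around 0.75.
p_dominant = 3 / 4
n_plants = 100
n_experiments = 20

ratios = []
for _ in range(n_experiments):
    dominant = sum(random.random() < p_dominant for _ in range(n_plants))
    ratios.append(dominant / n_plants)

spread = max(ratios) - min(ratios)
print(f"observed ratios span {min(ratios):.2f} to {max(ratios):.2f} (spread {spread:.2f})")
# Reported ratios that all sit almost exactly on 0.75 would be "too perfect".
```

Running this a few times shows ratios routinely drifting several percentage points from 0.75, which is the noise Mendel's reported counts suspiciously lack.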
Ooooop. Just got told by GP I can't have ADHD because I am an academic and wait for it... "if you have high IQ, you cannot have ADHD" WTF I cried ofc but was told "time for my next patient" so I left ofc...
One of the most dangerous things many new Ph.D. students believe is that science is "pure" & less biased. It shuts down critically thinking about how unjust & unmeritocratic science culture is. The sooner we realise we're not some magical paradise of fairness the better.
#phdchat
The video on the left was one of ~20 shared by OpenAI in its announcement of Sora, its text-to-video generator. It claims the video was, as this viral tweet notes, "generated by Sora."
The video on the left is from Shutterstock, with whom OpenAI has a partnership.
It's plagiarism not because machines can write something, but exactly because they cannot. It's regurgitated human writing, with attribution made almost impossible; the theft almost perfectly carried out by corporations who profit off our stolen labour.
AI literally has made us unable to communicate/search for stuff, e.g. "Greek present perfect continuous":
1) Greek does not have this tense, and yet it's the top result on Google.
2) να is not a verb, and "Έχω διαβάζοντας το βιβλίο για μία εβδομάδα." (word-for-word, "I have reading the book for a week") is nonsense.
"Sometimes academic papers that are so complex you need to reread them multiple times to understand ...are bad writing."
This is something I wish I knew/was more sure of as a PhD student when I saw dense-ass papers and thought it was 100% me.
#PhDchat
#AcademicChatter
This is so cool. Automatically create a
@Docker
image from your
@github
repo and create a
@ProjectJupyter
notebook too! Great idea to help with code lifespan!
Super proud/excited to share our [
@spookkachu
@kleinherenbrink
] paper "Pygmalion Displacement: When Humanising AI Dehumanises Women" wherein we develop a lens to help us trace a type of harm towards women within/by AI as a field & as a technology:
1/4
I keep hearing this logic, so I want to dispel it: nothing about getting a PhD makes one smart. A non-negligible number of people I know have PhDs, and I don't trust their judgement/intellect on anything remotely to do with work or daily life. 1/3
I hope scientists realise how this happens in science too, especially via citational exclusion and the lifting of ideas from women and other minoritised groups, precisely because, as hbomb says, you only plagiarise people you do not respect: "plagiarism is an insult"
Twitter friends and frenemies, I have HUGE news that is literally so exciting I sometimes have trouble calming down to sleep at night:
I am moving to the Netherlands in October to work [even more closely] with
@andrea_e_martin
!
🥰
@andrea_e_martin
& I present: On logical inference over brains, behaviour, and artificial neural networks!
Formal logic allows us to describe metatheories in use in cognitive computational neuroscience, and helps us spot formal/inferential fallacies!
1/4
Hi all,
@andrea_e_martin
& I are excited to share our preprint:
How computational modeling can force theory building in psychological science
In it we present our view of psychology and how computational modeling should play a radical and central role.
No, we're not. I adore computers and studied computer science because I love them. AI is their misuse. Nobody who loves something wants it to be used for harm. 😌
The people claiming AI is useless are the same types of people who were shitting on PCs, smartphones and the internet during their early days.
These people lack vision and insight. Don’t listen to them.
Our [
@andrea_e_martin
] paper is OUT‼️
We present our path model of science. And how computational modelling forces us to confront our intuitions that remain unexamined — over and above stewardship of experimental practice (e.g., preregistration).
🧵1/n
As a compromise between UK and USA spelling, I propose using the appropriate Greek letter instead, e.g., colωr, behaviωr, categoriσation, modeλing, licenσe, prograμ.
Done. You are welcome.
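The joke is even mechanisable. A tiny stdlib sketch (the substitution mapping is just my reading of the examples in the tweet):

```python
# Playful sketch of the proposed UK/USA spelling compromise: swap the
# disputed letters for Greek ones via a simple lookup table.
compromise = {
    "colour": "colωr", "color": "colωr",
    "behaviour": "behaviωr", "behavior": "behaviωr",
    "categorisation": "categoriσation", "categorization": "categoriσation",
    "modelling": "modeλing", "modeling": "modeλing",
    "licence": "licenσe", "license": "licenσe",
    "programme": "prograμ", "program": "prograμ",
}

def anglo_hellenic(text: str) -> str:
    # Naive word-by-word replacement; good enough for a joke.
    for word, neutral in compromise.items():
        text = text.replace(word, neutral)
    return text

print(anglo_hellenic("computational modelling of color and behaviour"))
# → computational modeλing of colωr and behaviωr
```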
New blog post!
👩🏻💻👨🏿💻👩🏾💻👨🏽💻👩🏼💻👩🏿💻👨🏻💻
Why women in psychology can't program
"About two months ago my brother, who works in data science on social psychology data, asked me why his colleagues, who are women and have PhDs in psychology, cannot code"
I'm starting to think few people understand what intellectual property and labour theft are. The free and open source software & open science movements aren't about giving all our labour and copyright away for free to big technology companies. They are literally about the opposite.
A new paper in Nature found that you cannot, in fact, train AIs on AI-generated data and expect them to continue improving.
What happens is actually that the model collapses and ends up producing nonsense.
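The collapse dynamic can be caricatured in one dimension (this toy setup is mine, not the Nature paper's): repeatedly fit a Gaussian to samples drawn from the previous generation's fitted Gaussian.

```python
import random
import statistics

random.seed(0)

# Each "generation" is fit only to a small sample from the previous
# generation's fitted model, as when training on AI-generated data.
# Finite samples keep losing tail information, so the fitted spread
# tends to shrink across generations.
mu, sigma = 0.0, 1.0
sample_size = 10   # small, like a finite training set
generations = 200

sigmas = [sigma]
for _ in range(generations):
    sample = [random.gauss(mu, sigma) for _ in range(sample_size)]
    mu = statistics.mean(sample)
    sigma = statistics.stdev(sample)
    sigmas.append(sigma)

print(f"spread: generation 0 = {sigmas[0]:.3f}, generation {generations} = {sigmas[-1]:.3f}")
```

By the last generation the fitted distribution has typically narrowed to a sliver of the original: the toy analogue of a model collapsing into repetitive nonsense.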
As promised! A website I built with my own hands with the list of women and gender minorities in computational cognitive science!
👩🏼💻👩🏾💻👨🏿💻👩🏻💻👨🏽💻👩🏿💻
Pls RT & thx for the support! Esp to
@chbergma
!
I'm tweeting this because I keep getting asked offline:
Yes, I can code.
Yes, I can pick up a new programming language/tool in a few hours.
Yes, I can teach others to code at a basic level within a day or so.
Yes, I can FUCKING code.
FUCK.
I'm incredibly humbled and thankful. I laughed and cried, probably mostly cried, when I first knew; incredibly overwhelmed; feeling very loved. Thank you to my students, and my colleagues who supported me. 💞
What's up with undergrads missing class constantly due to sickness? I've heard this is a thing at multiple institutions. My class has been repeatedly decimated.
Some really eye-opening stuff on how IBM, Meta, Nvidia and HuggingFace are lobbying against AI regulation.
They’re spending millions and have dozens of full-time lobbyists desperately trying to avoid government oversight of their work.
Do my followers know there is a "sci-hub" for books? You can literally pirate whole ass books. Did you know this strange and unusual fact? What a fact. Don't pirate books though — very bad!
Sometimes I say eee-ther other times I say eye-ther.
Sometimes I say lay-tex other times I say lay-tek.
Sometimes I say soo-doo other times I say soo-doh.
BUT I WILL DIE BEFORE I SAY jif for gif. Anybody else like that? 😇
To all the people out there who can't really code but want to learn: what can I do to help you?
Would virtual office hours help, blog posts working through basic stuff, custom mini-lessons, something else?
Reply below and please RT. I want to hear all your suggestions.
☺️
The debunked, bigoted, racist Victorian pseudoscientific theory of physiognomy is alive and well in 2020! Many people are basically ignoring that we have full-blown ethics and undead-pseudoscience issues. When is the system going to be changed..? It lets actual deadly BS through!
Hi all, my "What Makes a Good Theory? Interdisciplinary Perspectives" Lorentz workshop keynote () paper is finally out, with a shamelessly stolen title ‼️
What makes a good theory, and how do we make a theory good?
1/
During my PhD, I lived off £900/mo (my rent was £600, so £300 for all other stuff) in London. I had no family support, neither financial nor psychological. I couldn't really cope as an immigrant. I somehow made ends meet — but I could never ever put myself through that again. 1/2
On deskilling, we always knew; it's the AI hypers who want us to forget: "This experience has demonstrated that it is impossible to create an absolutely reliable automatic system, and sooner or later people face the necessity to act after equipment fails." — Valentina Ponomareva
So I knew I could too. This being said, PhD as a process does make one likely know a lot, a great deal, about something. But knowing a lot is not the same as being competent, fast thinking, intellectually interesting, kind, etc. It helps, but it's by no means the whole story. 3/3
Now I'm an ass prof, I can't stop thinking about my science teacher when I was 9 (!!!) saying I cannot and should not be a scientist. Guess 9-year-old me knew what was up life wise.
The number of people who identify as computational/cognitive modellers and who think LLMs are human-like in a non-superficial or realist way is too high. I'm worried my own field doesn't know what a model is.
I'm gonna do a small thread on: Why did I coin
#bropenscience
?
The shortest answer to the question is: because it’s amusing and seems to upset exactly the right people, while also drawing attention to behaviours and ideas that harm
#openscience
and its adoption in deep ways.
I can't believe I still have to say this, but LLMs are OBVIOUSLY capable of reasoning.
You can literally watch them reason IN PLAIN ENGLISH in front of your very own eyes.
The cope around this is unreal.
Nothing about a PhD makes you automatically smart. 10 years ago when I was finishing my PhD, what kept me going was that I saw some of the absolute most dumbass/abusive humans — failures floating upwards, weaponised incompetence out the wazoo — getting PhDs no problem. 2/3
Very excited to announce I'm creating and teaching my very own module called "AI as a Science" at
@AI_Radboud
; with the ideas and support of
@IrisVanRooij
and
@JohanKwisthout
‼️ I'm extremely happy and inspired by this opportunity — really bringing all my interests together. ☺️
This is why, when AI bros claim they don't understand their models, you should laugh at them and not take it as evidence that the model is somehow brain-like or mysterious.
Hey, boomers, take note. This is how you grow old and are cool. Be like Liliana.
"I appealed to the conscience of everyone and thought that a commission against hatred as a principle would be accepted by all"
Wanted to share this amazing essay from artist Dave Palumbo. Who btw is an incredible artist so he definitely knows what he’s talking about!
So well written, and I agree so much with this. Worth a read:
The weirdest part is that the people who guzzle this down as fact, and as evidence of some amazing AI capabilities, could just watch any movie like Ex Machina or Her and believe that too, if they were consistent. This is just as make-believe. Why are we accepting adverts as factual these days?
Say hello to GPT-4o, our new flagship model which can reason across audio, vision, and text in real time:
Text and image input rolling out today in API and ChatGPT with voice and video in the coming weeks.
"correlation does not imply cognition"
Guest, O., & Martin, A. E. (2021, October 6). On logical inference over brains, behaviour, and artificial neural networks.
Delighted to say my paper with
@andrea_e_martin
is now improved, polished, and published!
Guest, O., & Martin, A. E. (2023). On Logical Inference over Brains, Behaviour, and Artificial Neural Networks. Computational Brain & Behavior.
1/3
They are hyping the hell out of AI girlfriends. Lemme say... We (
@spookkachu
@kleinherenbrink
) have a paper ready on this, see QT. This is not good and never has been.
Recently, I've witnessed some senior academics mock PhDs & postdocs for choosing industry. Apart from the utterly baseless points raised (hours are flexible in many data science jobs, workloads are often similar, etc.) — it's completely out of order in general. I see you.
I'm in my 30s, not a parent & I never ever want to be one, so take this with a pinch of salt, but... My parents never limited my device time. I was allowed to use my computer as long as I wanted. I'm a computational modeller now like 20 years later. Make of this what you will. 😂
Let's be serious: men think AI tools that plagiarise are huMAN-LIKE because that is what they do. They steal the ideas and labour off of anybody they can get away with, and attribute ideas to others only when it adds prestige, see 3.4 Minimizing harm:
This is really funny because it's not what's happening (they aren't building brains in any real sense), and it's not how we understand things scientifically. We don't recreate planets or plants to understand them in physics and biology; we build and curate models and theories.
“This is a prototypical example of how capitalism is predicated on exploitation. The researchers are performing a tremendous amount of highly skilled labor, and that labor is simply not compensated. The only ones receiving compensation are Taylor & Francis and Routledge.”
Excited to share this short & sweet article I wrote with
@samhforbes
! "Teaching coding inclusively: if this, then what?" The title perhaps says it all; we show through arguments & examples what women experience in programming class & what to do to remedy it.
nonsense: either the neural network doesn't store words, and then the output is also not words (it's all binary/numbers or whatever; this is all on a computer, after all), or the net stores words just as much as anything that is not a mind can store words (paper, binary, etc.)
Does anyone have any links to critical work on using LLMs to code? Showing how garbage it is yet still being used, or how it plagiarises without users knowing? Or are we/coders actually not even falling for the hype? I hear from outside academia that some are using it, sadly. 🫣
Marie Neurath is amazing and her book on cognitive science for kids is wonderful: Machines which Seem to Think (1954). H/t
@IrisVanRooij
for this video about her. Book available for free online.
@o_guest
@DimitrisPapail
That's actually just an observation which happens to be true, so that's kind of hard to make sense of as a claim. It *obviously* correlates. The question is why. Do you *really* have your phd?
Why we need computational modelling: even if everybody agrees on how the area of a circle is defined, there are apparently people unwilling to execute the formal model itself; and not only that, but the results are counterintuitive to them. 😂
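A minimal sketch of what "executing the formal model" means here (my toy, nothing from the tweet beyond the circle): compute the agreed formula, then check it against a simulation that assumes only the definition of a circle.

```python
import math
import random

random.seed(0)

# The agreed-upon formal model: area = pi * r**2.
def circle_area(radius: float) -> float:
    return math.pi * radius ** 2

# A Monte Carlo "experiment" that assumes only the definition of a circle
# (points with x**2 + y**2 <= r**2, sampled inside the bounding square).
def mc_area(radius: float, n: int = 100_000) -> float:
    inside = sum(
        random.uniform(-radius, radius) ** 2
        + random.uniform(-radius, radius) ** 2
        <= radius ** 2
        for _ in range(n)
    )
    return inside / n * (2 * radius) ** 2

print(circle_area(1.0))  # 3.14159...
print(mc_area(1.0))      # close to the formula, whatever our intuitions say
```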
Our neocortex is entirely controlled by our lizard brain.
Our future superhuman exocortex will be controlled by our neocortex.
We can design AI to have superhuman intelligence *and* be submissive.
For an entity to control another, it has to *want* to take control.
PSA: I don't retweet unpaid internships. I think they are inherently problematic because only very specific groups of people can afford to work without pay. If I ever RT one accidentally, lemme know, it's not something I want to promote or be associated with.
People who propose using LLMs to automate science are not only 100% wrong (see
@IrisVanRooij
), but also they are dooming us to never achieve "progress". They want science to be rehashing the past. Always geocentric, always adding epicycles. Always wrong.
In my work, I tend to focus on how generative AI can be, will be, and already is used in workplaces—as leverage against workers, as a tool for mgmt to erode status and automate jobs—but it poses other threats.
@parismarx
beautifully articulates one here:
Hi everybody! I'm incredibly proud and excited to share my article (with Andrea Caso and Rick Cooper) investigating neural network models through replication and experimentation, in
@CompBrainBeh
:
On Simulating Neural Damage in Connectionist Networks
I am going to talk a little about something that (yes, I know, I talk about it a lot) has been especially playing on my mind lately:
Learning how to code.
👩🏻💻👨🏾💻👩🏼💻👩🏿💻👩🏽💻
It's shocking that people need to be told this, shown this even. You cannot expect a model created to do X to have any meaningful, interpretable, relevant performance on Y. But because we've over-hyped deep learning and completely lost perspective, it has to be said, I guess!
There was an amazing long article I read once (pre LLM hype) on how software is written these days without care for beautiful crafting & complexity, undoing hardware optimisation & ignoring hardware constraints... Has anyone else seen this? It was so good. I wish I had the link.
Very excited to announce,
@IrisVanRooij
and I are soon to be looking for a PhD candidate to work with us in the Computational Cognitive Science group at
@DondersInst
&
@AI_Radboud
! I will post the link for applying as soon as it is up.
More info here:
🚨 Vacancy: PhD candidate in Meta-theory in Cognitive Science (0.6 FTE) / Junior lecturer in Computational Cognitive Science (0.4 FTE). Position is for 6 years. Deadline to apply: July 3. Please RT and/or consider applying & come work w/
@o_guest
& myself
@DondersInst
@AI_Radboud
🧵👇