My day job is to speak in an arcane snake language to a crystal vibrating at 3,000,000,000 cycles per second sitting in a cloud so that it can alter probabilities in the real world. If that isn't magic, what is?
Fun to see Self Organizing Maps make a return. I remember coding one, and my ML colleagues were very skeptical it could scale, etc. Interesting new use for them in interpretability.
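For anyone curious what a SOM actually does, here's roughly the algorithm as I remember coding it, as a toy numpy sketch (grid size, decay schedule, and function names are my own made-up choices, not any library's API):

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr=0.5, sigma=2.0, seed=0):
    """Toy self-organizing map: a 2D grid of weight vectors pulled
    toward input samples, with a neighborhood that shrinks over time."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys, xs], axis=-1).astype(float)  # grid positions
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            cur_lr = lr * (1.0 - t)               # decay the learning rate
            cur_sigma = sigma * (1.0 - t) + 1e-3  # shrink the neighborhood
            # best matching unit: the grid cell closest to this sample
            bmu = np.unravel_index(
                np.argmin(np.linalg.norm(weights - x, axis=-1)), (h, w))
            # Gaussian neighborhood on the *grid*, centered on the BMU
            d2 = ((coords - np.array(bmu, dtype=float)) ** 2).sum(-1)
            nb = np.exp(-d2 / (2.0 * cur_sigma ** 2))
            # pull every unit toward the sample, weighted by neighborhood
            weights += cur_lr * nb[..., None] * (x - weights)
            step += 1
    return weights
```

Neighboring grid cells end up with similar weight vectors, which is the whole point: a topology-preserving 2D map of the data.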
I'm 44. I co-led Google image search quality. I have an MS in architectural science (graphics), lots of papers (including ICLR), a technical Emmy, stars on GitHub, dozens of patents, etc., and I still get asked to write code that runs in Python in interviews. *Shrug*
I am 35. I co-lead the search engine at NL+BE's biggest e-commerce platform. I have an MS+PhD in CS, starred GitHub projects, blog posts, confs/meetups. But I still need to solve a silly number of algorithm & data structure puzzles in job interviews. 👈
#WrongDecisionsInLife
Today is my last day
@GoogleAI
for the third time. This time round I worked on VR, had a Nature digital healthcare paper, and did deep learning on medical records and signals. Going off to do startup entrepreneurial stuff next.
I'm double vaccinated but apparently I caught the covid. Not many symptoms; I only tested because work sends us kits and I test once a month. This new variant seems to spread very fast. Don't worry about me, I'm ok, just take care of yourself and get vaccinated!
I'm really loving the machine learning infrastructure that
@DeepMind
develops. I like the interface of Sonnet for TensorFlow 2.0 and the other infrastructure they've made available for use internally. Very usable and researcher friendly.
Colab, numpy and Jax are amazingly pleasant and fast to do
@Peter_shirley
's . I still haven't vectorized some parts with Jax, and I'm still wrestling with conditionals and the jit, but the interactivity of Colab makes it fun. Extra challenge: making it functional.
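The conditionals-under-jit thing, for anyone who hasn't hit it: a generic sketch of the trick I keep reaching for (not from my actual ray tracer; the function name and constants are made up). Python `if` on a traced value fails under jit, but `jnp.where` evaluates both branches and selects elementwise, and it vectorizes for free under `vmap`:

```python
import jax
import jax.numpy as jnp

def hit_value(t):
    # `if t > 0: ...` would fail under jit on a traced t;
    # select elementwise instead: 2t where hit, -1 where miss.
    return jnp.where(t > 0.0, t * 2.0, -1.0)

# jit-compile a batched version in one line
batched = jax.jit(jax.vmap(hit_value))
```

The cost is that both branches always get computed, which is usually fine for cheap arithmetic; `jax.lax.cond` exists for the expensive-branch case.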
My brother is visiting tomorrow for the first time in decades. Gonna tell him in person I'm a 🌈🦄. Given that he's a conservative christian not sure how he'll react but he'll have a couple hours in the car to process while we drive to the Napa wineries with my partner 😂
It would be a huge cosmic joke if it turned out that the theory of everything can only be represented by function approximation using deep neural networks
Uh, TF-IDF comes from information retrieval. It's a document feature, not an AI. There is no learning involved at all! "It uses a kind of (rudimentary) artificial intelligence called TF-IDF"
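For the record, this is essentially all of TF-IDF, just counting and a log (a toy sketch; real IR systems use smoothing and normalization variants):

```python
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of tokenized documents. Returns per-doc term scores."""
    n = len(docs)
    df = Counter()  # document frequency: how many docs contain each term
    for doc in docs:
        df.update(set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({
            # term frequency in this doc, weighted down by ubiquity
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return scores
```

A term that appears in every document gets idf log(1) = 0, i.e. it carries no signal. No training, no learning, no AI.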
Data scientists - be sure to thank all the people setting up your data warehouse, label database, ETL engines, machine schedulers, metrics trackers, artifact management, etc., because lemme tell ya, it takes a lot of work to get to those nicely packaged tensorflow examples.
Today I am grateful for python, numpy, jax, linux, pip, apt-get, gpg, bash, emacs, chrome, emacs ~ backups, cold SF weather, chocolate covered coconut snacks, a roof over my head, nice co-workers, road trips, the weekend, and that off-tune singing in the bar across the street.
Wouldn't it be cool if we could just publish papers as wiki posts, and then people could add missing citations and upvote/downvote the post and citations, and then a neat graph propagation algorithm automatically computes the relevance of the article, author h-index and all that?
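The graph propagation I have in mind is basically PageRank over the citation graph (a toy power-iteration sketch of my own; the adjacency convention is an assumption, not from any real system):

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """adj[i, j] = 1 if node i cites/links node j. Returns relevance scores."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # column-stochastic transition matrix; dangling nodes spread evenly
    trans = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n).T
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        # random surfer: teleport with prob (1 - damping), else follow links
        rank = (1 - damping) / n + damping * trans @ rank
    return rank
```

Run the same propagation on the paper graph for article relevance and on the author graph for something h-index-like, and the upvotes just become edge weights.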
If it makes y'all feel better: even with a grad degree, lots of papers, lots of open source code, a technical Emmy, and Google on your resume, you still get to do a python coding phone screen these days. Wonder when the industry will have a portable means of software accreditation 😂
At the joint workshop on AI and health everyone seems to be using RNNs and LSTMs but no simple baselines :/
#ICML2018
Don't know if they realize that linear models can do really well
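By a simple baseline I mean something like this: plain logistic regression on the same features, a dozen lines of numpy, before anyone reaches for an LSTM (toy sketch, full-batch gradient descent, hyperparameters arbitrary):

```python
import numpy as np

def logreg_fit(X, y, lr=0.5, iters=2000):
    """Logistic regression by gradient descent on the log loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        w -= lr * (X.T @ (p - y)) / len(y)       # gradient step on weights
        b -= lr * (p - y).mean()                 # gradient step on bias
    return w, b

def logreg_predict(X, w, b):
    return (X @ w + b > 0).astype(int)
```

If the fancy sequence model can't beat this, the sequence structure isn't buying anything.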
Huh, someone just pointed out to me that ImageNet was scraped from Google image search, which was already ranked using models I had built a long time ago, so the selected images are kinda biased toward working with several generations of machine learning already.
#generalization
Man, now I wanna go back to college and take operations research, electronic engineering, philosophy, comparative mythology, poetry, art, mathematics, and physics. Wish I'd been more liberal-artsy in college instead of rushing through and finishing my degrees ASAP.
Twitter seems better for
#AI
networking than LinkedIn! I mostly get recruiters on LinkedIn, but on Twitter I get a lot more interesting pointers to new papers, ideas, and connections in the fields I am interested in.
#AI
#healthcare
#EHR
#DeepLearning
Our daily update is published. States reported 63k new cases, about the same number as last Tuesday. They also recorded more than 1,000 deaths for the first time since May 29th. Today’s number of currently hospitalized (59k) COVID-19 patients is the third-highest in our data.
Our Nature digital paper on "Scalable and accurate deep learning with electronic health records" is finally published. My first
#healthcare
and
#AI
paper! My contribution was the modelling work, specifically one of the models: interpretable boosting.
I love einsum notation: no need to think through all the crazy reshapes to do something, just specify the indices. I was trying to work out how to do a batched outer product with matmul, but in einsum it's just "ij,ik->jk". Boom.
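Concretely, here's what that string buys you in numpy: summing outer products over a batch, versus having to see that it's a transpose-and-matmul (toy example, made-up shapes):

```python
import numpy as np

a = np.random.default_rng(0).random((5, 3))  # batch of 5 vectors, dim 3
b = np.random.default_rng(1).random((5, 4))  # batch of 5 vectors, dim 4

# sum over the batch index i of outer(a[i], b[i])
via_einsum = np.einsum("ij,ik->jk", a, b)
# the matmul way: same result, but you have to spot that it's a.T @ b
via_matmul = a.T @ b
assert np.allclose(via_einsum, via_matmul)

# want the per-example outer products instead? just keep the index:
per_example = np.einsum("ij,ik->ijk", a, b)  # shape (5, 3, 4)
```

Repeated indices get summed, kept indices survive: the whole reshape dance reduced to one string.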
I'm not kidding. I went from pioneering image ranking models, self-driving car perception, Emmy-award-winning personalized recommender systems, pioneering dynamic pricing, neural lightfield rendering, and writing Nature articles to writing SQL and running automl.
I decided to do a grad degree in humanities to balance out my tech and science background. We'll see how it goes. I had lots of fun doing comparative mythology in college and ancient languages so this is an extension of my undergrad interests.
Welcome to ancestral simulator, shard number 42. In this realm you will spawn in a pre-FTL civilization about to
- encounter aliens for the first time
- invent AGI
Join in fun group activities such as:
- balancing budgets
- surviving a global pandemic
- surviving climate change
Samy Bengio has stood by all of us in Brain Research, and it’s meant so much. Particularly for all the women who speak up. We started to call it “Samyland”, a safe space for the underrepresented and overlooked.
Today, in collaboration with the
@Harvard
Lichtman Laboratory, we're releasing a novel resource to study the human brain — an imaging dataset covering a cubic mm of cortical tissue with traces of tens of thousands of neurons and 130M annotated synapses.
I flew back last Thursday on 10 minutes' notice with just the clothes on my back, just in time to hold my mom's hand and sing her to sleep with my brother. She went peacefully and gracefully as we sang, heart rate dropping slowly.
Is ML the only field where every few years something new makes entire sub-branches of the field become neglected? People still learn Newton's laws with general relativity being around. Some audiophiles still prefer analogue over digital signal processing.
Reminds me of the time a journalist called TF-IDF (word frequency counting) AI. My twitter handle is a pun on eigenvector, by the way. Eigenvector decomposition is something they teach in high school in Singapore, in the very first linear algebra course in grade 11....
"Stitch Fix is using something called eigenvector decomposition, a concept from quantum mechanics, to tease apart the overlapping “notes” in an individual’s style. Using physics, the team can better understand the complexities of the clients’ style minds."
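For the record, here is eigenvector decomposition in its natural habitat: plain linear algebra, no quantum mechanics required (numpy sketch, toy matrix):

```python
import numpy as np

# a symmetric 2x2 matrix, e.g. a tiny covariance matrix
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# the decomposition: eigenvalues λ and eigenvectors v with A v = λ v
eigenvalues, eigenvectors = np.linalg.eig(A)

# each column v of `eigenvectors` satisfies A @ v == λ * v
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```

That's the whole "concept": directions a matrix merely stretches. It shows up in style/recommendation work via PCA on covariance matrices, which is exactly this applied to data.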
Used to think Science was about merit and hard work, and now it feels like everything else human: a popularity contest with in-crowds and all that. Or is it just the human condition? Any good books about this part of human psychology accessible for a fun read?
How long before we get tools that look at your ML code and automatically pick parts to vectorize and parallelize? For comparison, it took about 20 years from when I learned about auto-vectorization in compiler class to seeing SSE code get compiled from my regular C++.
Strangely, one way I tend to pick projects is to find roles I am underqualified for so that I can grow while in the role. It's the edge of the explore/exploit trade-off: find something that exploits your current skills but also lets you explore new ones!
The discourse about impostor syndrome bothers me, because it IS POSSIBLE to be underqualified for a role or responsibility.
I've had multiple experiences of trying to raise with a supervisor, "Hey, I'm not doing well at this," only to be told I was doing fine. This sucked!
One odd job I do is help some companies by reviewing their AI job reqs. One of the common mistakes I notice HR/hiring managers make in hiring deep learning talent is to put "PhD preferred" in a product AI research job req. This goes wrong in so many ways:
My ML tweets might seem weird, like a mix of newbie and experienced, and that's because I'm new to doing stuff outside of a mature walled code garden, which is very different from doing stuff in the chaos of the open source wilderness.