🙏🏻 thank you for saying this.
many of us are on call throughout the holidays, nights and weekends to make sure everything stays running smoothly indefinitely. no matter what bullshit is happening, our customers are everything.
People are dming me as if openai is going to turn off gpt in the next week.
People with that level of integrity towards their leader are likely to have the same level of integrity with their product, their service, and their customer.
Today I got a call inviting me to consider a once-in-a-lifetime opportunity: to become the interim CEO of
@OpenAI
. After consulting with my family and reflecting on it for just a few hours, I accepted. I had recently resigned from my role as CEO of Twitch due to the birth of my
copilot was a massive step jump for coding assistance
but feels like we're now trying to do the equivalent of skipping straight to l5 self-driving
i want gpt-4 completion and Q/A with human-in-the-loop, mostly-automated context building.
@JeffLadish
you will never stop people from building systems like this. it’s better that it’s published so it can be a known unknown rather than unknown unknown
gpt-v as a means for targeted content extraction from PDFs is so far ahead of any other method i’ve seen
my parsing logic went from 100s of lines to ~20 and accuracy is almost 100%
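a minimal sketch of what that kind of vision-based extraction request could look like. everything here is an assumption: the field names, the model id (which has since changed), and the prompt wording are placeholders, and the sketch only builds the chat-completions payload rather than calling the API.

```python
import base64


def build_extraction_request(page_png: bytes, fields: list[str]) -> dict:
    """Build a chat-completions payload asking a vision model to pull
    specific fields out of a PDF page rendered to PNG. Model id and
    field names are illustrative placeholders, not a fixed recipe."""
    data_url = "data:image/png;base64," + base64.b64encode(page_png).decode()
    prompt = (
        "Extract the following fields from this document page and "
        "reply with a single JSON object: " + ", ".join(fields)
    )
    return {
        "model": "gpt-4-vision-preview",  # assumed model id; check current docs
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
        "max_tokens": 500,
    }


# hypothetical fields for an invoice-like document
payload = build_extraction_request(b"\x89PNG...", ["invoice_number", "total", "due_date"])
```

the replacement for hundreds of lines of parsing logic is essentially just this: render page, attach image, name the fields you want back.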
the dalle3 -> real-esrgan -> fedex print pipeline
local fedex guy was based
once he saw what i was doing he helped me tweak the settings and print a few iterations quickly rather than waiting 24h for each re-print
we did a fedex hyperparam sweep
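the back-of-envelope math behind a pipeline like this: how much do you need to upscale a generated image before it holds up in print? assuming a 1024px DALL·E 3 output and a 300 dpi print target (both assumptions, not stated above), a quick sketch:

```python
import math


def upscale_factor(src_px: int, print_inches: float, dpi: int = 300) -> int:
    """Smallest integer upscale factor needed for a src_px-wide image
    to cover print_inches at the target dpi. Real-ESRGAN typically runs
    in 2x/4x steps, so an integer factor is the useful answer."""
    needed_px = print_inches * dpi
    return max(1, math.ceil(needed_px / src_px))


# a 1024px image printed at 12 inches needs 3600px, so one 4x pass
# (-> 4096px) clears the bar
print(upscale_factor(1024, 12))  # -> 4
```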
i’m playing with livekit and it’s interesting to imagine what a full-duplex multi-modal interaction with an LLM would look like
all UXs i’ve used enforce a discrete/turn-based interaction model
@petergyang
if the GPT is supposed to answer detailed questions about the file what’s really the difference from downloading the actual content? just seems like a less efficient way to consume the data 🤔
@goodside
if you do this without using underscore it’s a nice trick. the underscore makes it confusing as hell tho because most readers assume it’s an unused argument.
Hey Steven, dm me and you can use
@Slackhq
. 😇 Salesforce and Slack and Tableau will ALL match any OpenAI researcher who has tendered their resignation full cash & equity OTE to immediately join our Salesforce Einstein Trusted AI research teams under Silvio Savarese. Send me
all these people trying to dunk on this acting like most feats of modern engineering aren’t already a certain class of physics exploits. you have a device that fits into your pocket that allows you to talk to virtually anyone on the planet with sub-second latency.
you’d have to be a midwit of astonishing proportions to agree. it would require dismissing literally the entire social and political context of the decision.
like every time I prompt chatgpt for something I have to describe the context, the important models and libraries and versions. for large code-bases you may need many of these depending on which piece you are working on.
shit im gonna build this
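a deliberately tiny sketch of the automated-context-building idea: scan the file you're working on for its imports, cross-reference any pinned versions, and emit a preamble you'd prepend to the prompt. the file layout and `requirements.txt` convention are assumptions; a real tool would follow the import graph and pull in relevant definitions too.

```python
import ast
import pathlib


def collect_context(repo: pathlib.Path, focus_file: str) -> str:
    """Build a prompt preamble for one file: its top-level imported
    modules plus any pinned versions found in requirements.txt."""
    tree = ast.parse((repo / focus_file).read_text())
    modules = sorted(
        {
            name.name.split(".")[0]
            for node in ast.walk(tree)
            if isinstance(node, ast.Import)
            for name in node.names
        }
        | {
            node.module.split(".")[0]
            for node in ast.walk(tree)
            if isinstance(node, ast.ImportFrom) and node.module
        }
    )
    pins = {}
    req = repo / "requirements.txt"
    if req.exists():
        for line in req.read_text().splitlines():
            if "==" in line:
                pkg, ver = line.split("==", 1)
                pins[pkg.strip()] = ver.strip()
    lines = [f"File: {focus_file}", "Dependencies:"]
    for m in modules:
        lines.append(f"- {m}" + (f" (pinned {pins[m]})" if m in pins else ""))
    return "\n".join(lines)


# demo against a throwaway repo
import tempfile

demo = pathlib.Path(tempfile.mkdtemp())
(demo / "app.py").write_text("import requests\nfrom flask import Flask\n")
(demo / "requirements.txt").write_text("requests==2.31.0\nflask==3.0.0\n")
ctx = collect_context(demo, "app.py")
print(ctx)
```

the point is that all of this is mechanical: nothing in "which libraries, which versions, which file am i in" needs a human to retype it every prompt.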
happy new year.
magic we imagined impossible will become commonplace, expected. and we will, once again, see breakthroughs that were once inconceivable.
incredible how much alpha there is in grokking an unknown codebase using a good debugger. directly seeing call-flow and variables change over time results in the shortest possible feedback loop on creating and testing hypotheses.
the lack of copilot and format-on-save in xcode is atrocious. the two biggest productivity gains in the last 10 years are simply not available. almost makes me want to go with react-native.
My Administration just announced a proposed rule that would ban early termination fees for cable and satellite TV.
Companies shouldn't lock you into services you don't want with large fees.
It's unfair, raises costs, and stifles competition.
We're doing something about it.
it's interesting how modality changes interaction patterns. a good speech-to-speech model should be able to interrupt and be interrupted, but isn't really full-duplex. but for a text/chat model to feel natural it should be able to manage a superposition of different convo threads
@iamrobotbear
@OfficialLoganK
@OpenAI
if you start a new fine-tuning job with function call training data and use your fine-tuned model as the base it should work, but depending on the datasets results may vary
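roughly what one line of that function-call training data could look like. heavy hedge: this follows the OpenAI chat fine-tuning format as i understood it in late 2023 (a `functions` list plus an assistant `function_call`), the schema may have changed, and `get_weather` is a made-up function for illustration.

```python
import json

# one hypothetical training example for a function-calling fine-tune;
# a .jsonl dataset would hold one of these per line
example = {
    "messages": [
        {"role": "user", "content": "What's the weather in Berlin?"},
        {
            "role": "assistant",
            "content": None,
            "function_call": {
                "name": "get_weather",  # made-up function for illustration
                "arguments": json.dumps({"city": "Berlin"}),
            },
        },
    ],
    "functions": [{
        "name": "get_weather",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
}

jsonl_line = json.dumps(example)
```

whether a second fine-tune on top of this transfers cleanly is exactly the "results may vary" part: it depends on how the new dataset interacts with what the base fine-tune already learned.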
you could maybe do this by describing to the model in-context that certain text came before the previous response, but i’m not sure this interaction is within distribution wrt typical fine-tuning data
or maybe by maintaining a superposition of threads you run inference on in parallel
i think even dropping multi-modal raises some interesting questions.
you would probably need a way of representing to the model messages that come during inference. i don’t think it could be represented cleanly as a thread anymore, probably more of a graph
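a toy sketch of that graph-not-thread idea. everything here is invented for illustration: messages carry a list of parents instead of one predecessor, so a message that lands mid-inference can point at both the turn it interrupts and the turn it replies to.

```python
import itertools
from dataclasses import dataclass, field

_ids = itertools.count()


@dataclass
class Msg:
    """A conversation node with multiple parents: a message arriving
    while the model is mid-response can attach to both the interrupted
    turn and the turn it actually answers."""
    role: str
    text: str
    parents: list = field(default_factory=list)
    id: int = field(default_factory=lambda: next(_ids))


def lineage(msg):
    """Flatten one path through the graph into a linear context,
    oldest first. Follows only the first parent -- a real scheduler
    would have to choose or merge branches."""
    out = []
    node = msg
    while node is not None:
        out.append(node)
        node = node.parents[0] if node.parents else None
    return list(reversed(out))


q = Msg("user", "what's full duplex audio?")
a = Msg("assistant", "it means both sides can transmit at once ...", parents=[q])
# this one arrives while `a` is still being generated
interrupt = Msg("user", "wait, shorter please", parents=[a, q])
history = lineage(interrupt)
```

the interesting part is `lineage`: once messages form a DAG, "the conversation so far" stops being a given and becomes a choice the system has to make at every inference step.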
We’ve barely started to show the world the power of robotics. Imagine sustainable and healthy food that is available at prices anyone can afford and in places never thought financially viable.
@tszzl
no concrete accusations, no signatures, no names. seems like “former employees” would have nothing to lose yet there was no substance across any measurable axis
@svpino
i agree there are new challenges with productionizing ML, but i think this is a pretty unrealistic take on what production non-ml software is like.