So. About @OpenAI. I've been bombarded by sales calls from rival LLM companies seeking some opportunistic business wins. That's fair game and business is a contact sport, so I get it. And we were already evaluating backup providers as a hedge before the drama. But the reason we're
Joined OpenAI 10 days ago. This place is absolutely electric and very special. So excited to be on this journey to advance AI and make it useful for all of humanity with @sama, @miramurati, @gdb, @ilyasut, @npew, @bobmcgrewai and the whole OpenAI team.
How the entire OpenAI team came together and showed support for each other during a time of crisis will be one of the most remarkable and cherished experiences of my life.
Proud and honored to be working with such an amazing team.
Sam Altman is back as CEO, Mira Murati as CTO and Greg Brockman as President. OpenAI has a new initial board. Messages from @sama and board chair @btaylor
Introducing Sora, our text-to-video model.
Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions.
Prompt: “Beautiful, snowy
Super excited to welcome the amazing team at Rockset to OpenAI! This will help us give better tools to users, developers and companies to leverage their own data to build more intelligent applications.
We are working with Apple to integrate ChatGPT more deeply into iOS, macOS and iPadOS to augment Siri, to help people write better, ask questions about documents and photos, and generally make advanced AI more useful to Apple users.
We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.
We are collaborating to figure out the details. Thank you so much for your patience through this.
We're rolling out interactive tables and charts along with the ability to add files directly from Google Drive and Microsoft OneDrive into ChatGPT. Available to ChatGPT Plus, Team, and Enterprise users over the coming weeks.
I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
Announcing the o1 series of models - trained using large-scale RL to think productively before answering, they represent a significant step forward in reasoning capabilities - which helps solve harder problems in science, math and coding.
🔥Exciting news -- GPT-4-Turbo has just reclaimed the No. 1 spot on the Arena leaderboard again! Woah!
We collected over 8K user votes from diverse domains and observed its strong coding & reasoning capability over other models. Hats off to @OpenAI for this incredible launch!
ChatGPT for Android is now available for download in the US, India, Bangladesh, and Brazil! We plan to expand the rollout to additional countries over the next week.
People have amazing ideas for what to build with AI.
It was fun being a judge and seeing many of these cool prototypes at the weekend hackathon organized by @southpkcommons + @OpenAI. Great hacker energy overall and congrats to the winners!
Looking forward to meeting with builders and founders in Bangalore on Jan 5th to talk about the future of AI.
Please register here if you are interested:
Calling AI founders and builders! Join us in Bangalore on Jan 5 for a conversation and mixer with @snsf, who leads ChatGPT and the developer platform at @OpenAI. Register here:
We’re testing ChatGPT's ability to remember things you discuss to make future chats more helpful.
This feature is being rolled out to a small portion of Free and Plus users, and it's easy to turn on or off.
Introducing GPT-4o mini - which is significantly smarter and 60% cheaper than GPT-3.5T. Another step towards making AI more broadly beneficial for all.
Introducing GPT-4o mini! It’s our most intelligent and affordable small model, available today in the API. GPT-4o mini is significantly smarter and cheaper than GPT-3.5 Turbo.
Our new assistants API launched last week. Highlights:
-- New file search tool with improved knowledge retrieval and support for 10,000 files per assistant
-- More control to set maximum input and output tokens per run to control cost
-- Support for tool choice to control whether
Introducing a series of updates to the Assistants API 🧵
With the new file search tool, you can quickly integrate knowledge retrieval, now allowing up to 10,000 files per assistant. It works with our new vector store objects for automated file parsing, chunking, and embedding.
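As a rough sketch, wiring an assistant up to file search comes down to a request body like the one below. Field names follow the beta Assistants API at the time of this announcement and may have changed since; the vector store ID is a placeholder.

```python
# Illustrative config for creating an assistant with the file search tool,
# backed by a vector store that handles parsing, chunking, and embedding.
# "vs_placeholder123" stands in for a real vector store ID.
assistant_config = {
    "name": "docs-helper",
    "model": "gpt-4-turbo",
    "instructions": "Answer questions using the attached product docs.",
    "tools": [{"type": "file_search"}],
    "tool_resources": {
        "file_search": {"vector_store_ids": ["vs_placeholder123"]},
    },
}
```

In practice you would pass these fields to the assistant-creation endpoint after uploading your files to the vector store.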
GPT Store is live. I've had fun playing with Consensus, AllTrails and Books GPTs and am looking forward to using more from our amazing creator community.
Memory is now available to all ChatGPT Plus users. Using Memory is easy: just start a new chat and tell ChatGPT anything you’d like it to remember.
Memory can be turned on or off in settings and is not currently available in Europe or Korea. Team, Enterprise, and GPTs to come.
You can now create GPTs - customized versions of ChatGPT for a specific purpose with custom instructions, expanded knowledge and custom actions - with no code.
GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home — and then share that creation with others. No code required.
GPT-4 Turbo with Vision with JSON mode and function calling is now in GA. Some great apps built with it: Devin by @cognition_labs (AI software engineering assistant), @healthifyme (nutrition advice from photos), and Make Real by @tldraw (website creation from whiteboard drawings).
GPT-4 Turbo with Vision is now generally available in the API. Vision requests can now also use JSON mode and function calling.
Below are some great ways developers are building with vision. Drop yours in a reply 🧵
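A minimal sketch of what combining a vision input with JSON mode looks like in a chat completions request body. The image URL is a placeholder, and the field names reflect the API as described at launch.

```python
import json

# Illustrative chat completions payload: an image input plus JSON mode,
# which constrains the model to return valid JSON.
payload = {
    "model": "gpt-4-turbo",
    "response_format": {"type": "json_object"},
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "List the objects in this photo as JSON."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
}

body = json.dumps(payload)
```

The same request shape also accepts a `tools` list for function calling alongside the image content.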
New features in our fine-tuning API and custom models program:
-- Saving checkpoints
-- Side-by-side comparison in the Playground
-- Various other improvements to the fine-tuning Dashboard
-- A new assisted fine-tuning offering to have a lot more flexibility in the process
We’re rolling out web browsing and Plugins to all ChatGPT Plus users over the next week! Moving from alpha to beta, they allow ChatGPT to access the internet and to use 70+ third-party plugins.
Introducing the Batch API: save costs and get higher rate limits on async tasks (such as summarization, translation, and image classification).
Just upload a file of bulk requests, receive results within 24 hours, and get 50% off API prices:
Model Spec is our document that lists objectives and rules for how we guide model behavior.
You can now give feedback on this to shape the future of how our models respond.
SearchGPT: a prototype that showcases a new way to search by combining AI models with real-time web information. We plan to integrate this into ChatGPT later.
Streaming is now available in the Assistants API! You can build real-time experiences with tools like Code Interpreter, retrieval, and function calling.
ChatGPT can now see, hear, and speak. Rolling out over the next two weeks, Plus users will be able to have voice conversations with ChatGPT (iOS & Android) and to include images in conversations (all platforms).
We've just launched fine-tuning for GPT-3.5 Turbo! Fine-tuning lets you train the model on your company's data and run it at scale. Early tests have shown that fine-tuned GPT-3.5 Turbo can match or exceed GPT-4 on narrow tasks:
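Fine-tuning training data uses a JSONL chat format: one JSON object per line, each a short conversation ending with the assistant's target reply. A sketch with illustrative contents:

```python
import json

# One training example in the chat fine-tuning format. A real training
# file contains many of these, one JSON object per line.
example = {
    "messages": [
        {"role": "system",
         "content": "You answer with the IATA airport code only."},
        {"role": "user", "content": "Main airport in Paris?"},
        {"role": "assistant", "content": "CDG"},
    ]
}

line = json.dumps(example)
```

After uploading the file, you start a job against the fine-tuning endpoint with `gpt-3.5-turbo` as the base model.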
ChatGPT Team - our new self-serve plan for teams of all sizes with enterprise-grade data privacy & security and access to models and tools - is now live.
Introducing ChatGPT Team: A new plan for teams of all sizes with access to advanced models and tools, business-grade data privacy & security, and the ability to create and share custom GPTs.
Upgrade from your ChatGPT account.
ChatGPT for Enterprise is ready! You can now have ChatGPT that makes you more effective at work, is fully private and secure on your data, and can be more customized for your organizational needs.
Introducing ChatGPT Enterprise: enterprise-grade security, unlimited high-speed GPT-4 access, extended context windows, and much more. We’ll be onboarding as many enterprises as possible over the next few weeks. Learn more:
@angrymaya1980 We're continuing to work on a fix. The underlying problem is with our database replicas. ChatGPT and non-completion API endpoints are partially impacted, while completion API endpoints, including chat completions, are only minimally impacted. We will post as we have updates.
We’re releasing a guide for teachers using ChatGPT in their classroom — including suggested prompts, an explanation of how ChatGPT works and its limitations, the efficacy of AI detectors, and bias.
GPT-4 API is now available to all paying OpenAI API customers. GPT-3.5 Turbo, DALL·E, and Whisper APIs are also now generally available, and we're announcing a deprecation plan for some of our older models, which will begin to retire at the beginning of 2024:
Our new text-to-image model, DALL·E 3, can translate nuanced requests into extremely detailed and accurate images.
Coming soon to ChatGPT Plus & Enterprise, where ChatGPT can help you craft amazing prompts to bring your ideas to life:
Code Interpreter will be available to all ChatGPT Plus users over the next week.
It lets ChatGPT run code, optionally with access to files you've uploaded. You can ask ChatGPT to analyze data, create charts, edit files, perform math, etc.
Plus users can opt in via settings.
ChatGPT can now browse the internet to provide you with current and authoritative information, complete with direct links to sources. It is no longer limited to data before September 2021.
We’ll be hosting our first developer conference, OpenAI DevDay, on November 6. Registration to attend in person in San Francisco will open in a few weeks. We’ll also livestream the keynote.
Launching new features on our API today:
[1] New Assistants API that is stateful, with expanded knowledge (retrieval), function calling and code interpreter.
[2] New GPT-4T model that has long context, fresh knowledge, more control and is 3x cheaper for input tokens and 2x
We're rolling out new features and improvements that developers have been asking for:
1. Our new model GPT-4 Turbo supports 128K context and has fresher knowledge than GPT-4. Its input and output tokens are respectively 3× and 2× less expensive than GPT-4. It’s available now to
It’s been ~1 year since we launched our breakthrough product recognition system, GrokNet. Learn how we’ve scaled & improved our technology to make shopping easier with new applications, like product match and AI-assisted product tags on @Facebook.
It was an absolute pleasure and honor to have been part of the judging panel of the Forbes AI 50 list. It is the third time I've done this and the companies have been stronger and more diverse in their impact than ever before. Congrats to all the winners!
Now in its fifth year, our annual list, produced in partnership with Sequoia and Meritech Capital, recognizes the most promising privately-held companies building businesses out of artificial intelligence.
#ForbesAI50
We are building a new team to focus on the safety of highly capable AI systems. We are also launching a preparedness challenge to expand our understanding of areas of concern with AI systems -
We are building a new Preparedness team to evaluate, forecast, and protect against the risks of highly-capable AI—from today's models to AGI.
Goal: a quantitative, evidence-based methodology, beyond what is accepted as possible:
Did you know that you can build a virtual machine inside ChatGPT? And that you can use this machine to create files, program, and even browse the internet?
Huge kudos to the research team for advancing the reasoning capabilities, and to the safety, product, GTM, strategy and all the other teams at OpenAI for the launch.
Looking forward to seeing all the new things people do with o1.
Introducing Custom instructions! This feature lets you give ChatGPT any custom requests or context which you’d like applied to every conversation. Custom instructions are currently available to Plus users, and we plan to roll out to all users soon!
We are rolling out a new program for industry software engineers with a diverse range of backgrounds looking to transition to a career in AI.
#facebookai
Hate speech can come in many forms, including memes that combine text & images. We launched the Hateful Memes Challenge, a first-of-its-kind competition, to help the AI community find new ways to detect multimodal hate speech. Learn about the winners here:
We launched the DeepFake detection challenge at NeurIPS this year. Eager to work together with the whole AI community to tackle this important problem.
We’ve launched the Deepfake Detection Challenge, an open, collaborative initiative to accelerate development of new technologies to detect deepfakes and manipulated media. The challenge features a new, unique data set of 100K-plus videos for researchers.
We recently announced the hateful memes challenge to push research in multi-modal models and to solve an important problem for Facebook.
#facebookai
#ai
Introducing Opacus, a new high-speed library for training @PyTorch models with differential privacy (DP) that’s more scalable than existing state-of-the-art methods. Read more:
Results from our Deepfake detection challenge.
It was great to see over 2,000 participants. The top model achieved an average precision of 65.18% - we still need a lot more innovation to address this difficult and important task.
2,114 participants around the globe entered the Deepfake Detection Challenge. We’re now sharing the winning models and insights from this first-of-its-kind open initiative to address the challenge of deepfake videos and images.