ChatGLM

@ChatGLM

1,531
Followers
122
Following
58
Media
211
Statuses

Teaching Machines to Think Like Humans

Joined November 2023
@ChatGLM
ChatGLM
2 months
🚀 CogVideoX-5B video generation model released! Bigger size, better quality, lower cost (runs on just 12GB of GPU memory) 🎉 🔗 Code: 🤗 Model: 🌐 Try it out:
6
57
213
@ChatGLM
ChatGLM
2 months
Thank you to the passionate developers for your continued support and patience. CogVideoX-5B-I2V is released! 😀 Github: CogVideoX-5B-I2V model: Gradio space:
4
55
199
@ChatGLM
ChatGLM
3 months
We have just open-sourced our text-to-video model, CogVideoX, which is similar to models like Sora or Gen3.
Tweet media one
10
35
163
@ChatGLM
ChatGLM
4 months
🚀 CodeGeeX4 open-sourced! 🎉 🔥 Top model under 10B parameters on GitHub & HuggingFace. 🔍 128K context ⚡️ 3x faster 📄 Auto-generate README 🧠 Smart code Q&A Upgrade your IDE plugin now! 💻✨ GitHub: HF:
1
29
126
@ChatGLM
ChatGLM
6 months
🚀🔥🔥 We've released CogVLM2, a cutting-edge multimodal dialogue model with improved TextVQA, DocVQA, 8K content, and high-res images (1344x1344), supporting English and Chinese. Check it out: [CogVLM2]()
Tweet media one
Tweet media two
Tweet media three
Tweet media four
10
31
114
@ChatGLM
ChatGLM
3 months
We are not just doing “demo only” video generation. With Ying, we are bringing a video generation AI that everyone can use. Create a 6-second video in just 30 seconds. Try our new product now. YING:
8
31
106
@ChatGLM
ChatGLM
5 months
🚀 Check out GLM-4! This open-source, multilingual, multimodal chat model supports 26 languages and offers advanced features like code execution and long-text reasoning. Perfect for AI enthusiasts and developers! 🌐 #AI #OpenSource #MachineLearning
6
19
89
@ChatGLM
ChatGLM
2 months
No citations in LLM answers = hard to trust due to possible hallucinations. 😤 Solution? LongBench-Cite benchmark + LongCite-8B/9B models for sentence-level citations in one shot. 🔥 #AI #LLMs #agent #LLM
1
14
74
@ChatGLM
ChatGLM
4 months
🚀 Introducing CogVLM2-Video: a breakthrough in video understanding with superior temporal awareness! 🎥🤖 Perfect for video captioning and summarization. Check it out!📊 ✨ Blog:
Tweet media one
0
16
61
@ChatGLM
ChatGLM
9 days
🎉 Exciting news! 📷 ZhipuAI has launched and open-sourced GLM-4-Voice, an end-to-end voice model that directly understands and generates Chinese and English speech! 📷🗣️
2
40
137
@ChatGLM
ChatGLM
2 months
🌈 Shake hands with an alien. 😈 -- by CogVideoX-5B. Open-sourcing countdown.
5
7
44
@ChatGLM
ChatGLM
3 months
🚀 Meet LongWriter-6k! 🌟 The ultimate open-source dataset for extended text, paired with LongWriter—our cutting-edge model for long-form storytelling. Boost your AI content with these powerful tools! 🔥 Dataset: GitHub:
1
7
43
@ChatGLM
ChatGLM
5 months
🚀 We published a tech report on the GLM family! ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools.
Tweet media one
1
10
40
@ChatGLM
ChatGLM
2 months
🌞 Amidst the height of summer, 🦢 swans glide gracefully across the lake. The air is still, yet it feels as though the sound of cicadas 🦗 lingers in the silence. - by CogVideoX-5B #CogVideoX
2
3
32
@ChatGLM
ChatGLM
2 months
🌈 Live broadcast with @_akhaliq about how to train and quickly use CogVideoX-5B 🦋. 😋
0
10
28
@ChatGLM
ChatGLM
4 months
🎉 Exciting news! The GLM-4 model is now supported for deployment on Ollama! With this integration, you can easily run GLM-4 locally, ensuring greater control and privacy over your data. 🚀 #ollama Ollama:
Tweet media one
0
2
28
@ChatGLM
ChatGLM
2 months
Tweet media one
2
6
26
@ChatGLM
ChatGLM
2 months
⚡️ We just published new models at #KDD2024, including GLM-4-Plus, CogView-3-Plus, GLM-4V-Plus, and CogVideoX. Here are the details:
2
2
24
@ChatGLM
ChatGLM
5 months
🚀 LVBench, a benchmark for long video understanding with hours of QA data across 6 main and 21 sub-categories. We provide high-quality annotated data and use LLM to filter challenging questions. We hope LVBench drives progress in video understanding!
0
2
18
@ChatGLM
ChatGLM
5 months
The open-source GLM-4 model is coming.
0
2
15
@ChatGLM
ChatGLM
5 months
🌟 Introducing Inf-DiT, a groundbreaking method that solves the memory bottleneck in generating ultra-high resolution images like 4096x4096! 📱🌐 Code:
1
4
14
@ChatGLM
ChatGLM
4 months
Gemma 2 (2024-06-28) vs GLM-4-9B(2024-06-05)
Tweet media one
0
0
12
@ChatGLM
ChatGLM
4 months
🚀 Exciting news! Following GLM-4, CodeGeeX also supports Ollama! 🎉 This high-performance multi-language code generation model is perfect for code completion and AI development. 🌐👨‍💻 Ollama:
0
4
10
@ChatGLM
ChatGLM
4 months
📢 Exciting news from Bengio's team! They've introduced a new multimodal benchmark, and guess what? 🎉 CogVLM2 is leading the pack among open-source models! 🚀
Tweet media one
Tweet media two
Tweet media three
Tweet media four
0
2
10
@ChatGLM
ChatGLM
2 months
vid2vid
@tintwotin
tintwotin
2 months
Muybridge vs #CogVideoX . Proof of concept (vid2vid) Patch by @aryanvs_ :
2
24
109
0
2
9
@ChatGLM
ChatGLM
8 months
/1 We've published the paper on CogView3, the text-to-image technology applied in . Excited to share our advancements in this field!🎉 #CogView3 #TextToImage #AItechnology #GLM4
Tweet media one
2
2
6
@ChatGLM
ChatGLM
2 months
1/GLM-4-Plus (Language capabilities)
Tweet media one
1
1
8
@ChatGLM
ChatGLM
11 months
Our new research: #CogAgent, an image understanding model developed based on #CogVLM, which features visual-based GUI Agent capabilities and further enhancements in image understanding. Paper:
Tweet media one
Tweet media two
1
0
7
@ChatGLM
ChatGLM
10 months
GLM-4 is coming! 🔥🔥🔥 The new-generation foundation model GLM-4 delivers overall performance 60% higher than GLM-3, approaching GPT-4.
2
0
6
@ChatGLM
ChatGLM
2 months
@aryanvs_ @diffuserslib we released the I2V model a few hours ago. 😄
2
0
6
@ChatGLM
ChatGLM
5 months
CogVLM & CogAgent & CogCoM
@rohanpaul_ai
Rohan Paul
5 months
Nice paper surveying Multimodal AI Architectures -- with a comprehensive taxonomy and analysis of their pros/cons & applications in any-to-any modality model development 📌 𝐂𝐨𝐦𝐩𝐫𝐞𝐡𝐞𝐧𝐬𝐢𝐯𝐞 𝐓𝐚𝐱𝐨𝐧𝐨𝐦𝐲: First work to explicitly identify and categorize four broad
Tweet media one
6
155
590
0
2
6
@ChatGLM
ChatGLM
7 months
Pre-training loss, not model parameters, is the key to emergent abilities. Paper:
Tweet media one
2
3
5
@ChatGLM
ChatGLM
2 months
wow, great!
@cocktailpeanut
cocktail peanut
2 months
Video-to-Video for CogVideo A CogVideo video-to-video diffusers pipeline just dropped. It lets you take any video and turn it into another video. So I've added a "video-to-video" tab to the CogVideo Gradio app. Example: turn a car driving video into a video game version.
9
56
310
0
1
5
@ChatGLM
ChatGLM
2 months
@elonmusk an open-sourced "Sora" from China, 😃
0
0
5
@ChatGLM
ChatGLM
11 months
We have published an interesting work where a children's picture book can be generated through prompts. CogCartoon: Towards Practical Story Visualization,
Tweet media one
Tweet media two
Tweet media three
Tweet media four
0
0
5
@ChatGLM
ChatGLM
6 months
#ICLR2024 We present 3 intriguing tech trends in #AGI at @iclr_conf: multimodal models, the GLM-OS computing system, and GLM-zero's unique learning approach. ChatGLM's progress and innovative research make a mark, signifying China's participation in ICLR's prominent sessions.
Tweet media one
Tweet media two
Tweet media three
Tweet media four
0
2
5
@ChatGLM
ChatGLM
2 months
😋😋
@imxiaohu
小互
2 months
ZhipuAI has released its latest foundation model, GLM-4-Plus, and demonstrated vision capabilities similar to OpenAI's GPT-4o, enabling free-form voice calls and visual reasoning, with public access opening on August 30! GLM-4-Plus shows excellent performance in several areas, specifically: 1. Language capabilities:
22
43
159
1
0
5
@ChatGLM
ChatGLM
2 months
cool 😋
@tintwotin
tintwotin
2 months
CogVideox-5b via Pallaidium and Blender #b3d
2
7
49
0
0
4
@ChatGLM
ChatGLM
5 months
@yihong0618 Not bad, not bad!
0
0
5
@ChatGLM
ChatGLM
6 months
🚀 Sick of endless screen time? Meet AutoWebGLM! 🌐 It's a web navigation agent framework, inspired by human browsing patterns and built on #ChatGLM3-6B. Intrigued? Follow our channel 😎 and dive into this article first:
0
0
4
@ChatGLM
ChatGLM
6 months
The updated ZhipuAI API now supports seamless transitions from OpenAI interfaces to ZhipuAI's GLM series, like GPT-4 to GLM-4 and GPT-4V to GLM-4V. Simply set the base_url to access ZhipuAI models, as detailed in the GLM-Cookbook tutorials:
0
1
3
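The base_url switch described in that tweet can be sketched in a few lines. The endpoint URL below is an assumption based on ZhipuAI's public documentation, so verify it against the GLM-Cookbook before use:

```python
# Minimal sketch: point an OpenAI-style client at ZhipuAI's GLM endpoint.
# GLM_BASE_URL is an assumed value from ZhipuAI's docs -- verify before use.
GLM_BASE_URL = "https://open.bigmodel.cn/api/paas/v4/"

def glm_client_kwargs(api_key: str) -> dict:
    """Build kwargs for openai.OpenAI(); only base_url changes vs. OpenAI."""
    return {"api_key": api_key, "base_url": GLM_BASE_URL}

# Usage (requires the `openai` package):
#   from openai import OpenAI
#   client = OpenAI(**glm_client_kwargs("your-zhipuai-key"))
#   reply = client.chat.completions.create(
#       model="glm-4",  # swap "gpt-4" for "glm-4"
#       messages=[{"role": "user", "content": "Hello"}],
#   )
```

No other client code changes: the OpenAI SDK sends requests wherever base_url points, which is what makes the drop-in migration possible.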
@ChatGLM
ChatGLM
6 months
GLM-4-0116
@lmarena_ai
lmarena.ai (formerly lmsys.org)
6 months
Exciting leaderboard update🔥 We've added @01AI_Yi Yi-Large to Arena and collected 15K+ votes over the past week. Yi-Large's performance is super impressive, securing the #7 spot, almost on par with GPT-4-0125-preview! Huge congrats to on this incredible
Tweet media one
18
45
245
2
0
3
@ChatGLM
ChatGLM
5 months
4/5 🔗 Explore our project and paper to dive deeper into the magic behind UniBA and how it’s revolutionizing the field of image generation. Join us in pushing the boundaries of AI! Code: Paper:
1
0
2
@ChatGLM
ChatGLM
5 months
In 1956, the Dartmouth Conference sparked the dawn of AI. 🌟 GLMs Agent takes you back to this revolutionary moment. Now fully launched! 🚀 #AI #DartmouthConference #GLMsAgent
0
0
3
@ChatGLM
ChatGLM
2 months
hh 😄
@_DhruvNair_
Dhruv Nair
2 months
tfw waiting in line for CogVideoX-5B Space ....
0
0
4
0
0
3
@ChatGLM
ChatGLM
11 months
We have released #AlignBench -- Benchmarking Chinese Alignment of Large Language Models.
Tweet media one
Tweet media two
Tweet media three
Tweet media four
0
1
3
@ChatGLM
ChatGLM
2 months
6/CogVideoX
Tweet media one
0
0
3
@ChatGLM
ChatGLM
11 months
@huybery Congrats, bro! 😀
1
0
3
@ChatGLM
ChatGLM
3 months
1
0
2
@ChatGLM
ChatGLM
2 months
5/CogView-3-Plus
Tweet media one
1
0
2
@ChatGLM
ChatGLM
10 months
/2 Instruction following capability: GLM-4 achieves levels of 88% and 85% of GPT-4 in Chinese and English, respectively, at the prompt level on IFEval, and reaches levels of 90% and 89% of GPT-4 in Chinese and English, respectively, at the Instruction level.
Tweet media one
1
0
1
@ChatGLM
ChatGLM
1 year
We just updated cogvlm-chat-v1.1,😀
@skalskip92
SkalskiP
1 year
looking for OpenAI-4V alternatives? - LLaVA - BakLLaVA - CogVLM - Qwen-VL different tasks: - VQA - answering questions about images - OCR - reading text - zero-shot detections link:
11
57
378
1
1
1
@ChatGLM
ChatGLM
10 months
/11 All Tools - Automatic Invocation of Multiple Tools. In addition to the automatic invocation of individual tools mentioned above, GLM-4 is also capable of automatically invoking multiple tools, such as combining web browsing, CogView3, and code interpreters.
Tweet media one
Tweet media two
1
0
2
@ChatGLM
ChatGLM
6 months
🔥🔥🔥 Revolutionize your web browsing experience with AutoWebGLM! 🌐 This groundbreaking project from THUDM, available on GitHub, is set to redefine how we navigate the digital world. Harnessing the power of #ChatGLM3-6B!
0
1
2
@ChatGLM
ChatGLM
20 days
0
0
1
@ChatGLM
ChatGLM
3 months
1
1
2
@ChatGLM
ChatGLM
10 months
We are thrilled to announce that we will be releasing the next generation of our base model, GLM-4. Stay tuned for more details and exciting updates on Jan. 16, 2024.
Tweet media one
1
1
2
@ChatGLM
ChatGLM
3 months
1
0
2
@ChatGLM
ChatGLM
10 months
/7 All Tools - Text to Image. GLM-4 can create AI-generated paintings (CogView3) by combining context, as shown in the following image, where the large model can follow a person’s instructions to continuously modify the resulting image:
Tweet media one
1
0
2
@ChatGLM
ChatGLM
10 months
thx,👐
@abacaj
anton
10 months
Tried chatglm3-6b-32k for the first time... and it's actually kind of good? I ran humaneval on it and it scored 60%. It has near perfect recall on 32k context (context image from reddit)
Tweet media one
Tweet media two
12
33
347
0
0
0
@ChatGLM
ChatGLM
11 months
#CogVLM supports 4-bit quantization now! You can run inference with just 11GB of GPU memory! CogVLM is a powerful open-source visual language model (VLM).
0
0
2
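The arithmetic behind that 11GB figure is easy to check. The ~17B parameter count below refers to CogVLM-17B, and treating the leftover headroom as activation/runtime overhead is an illustrative assumption:

```python
def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """GPU memory for model weights alone, in GB (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

# CogVLM-17B weights at different precisions:
fp16_gb = weight_memory_gb(17e9, 16)  # 34.0 GB -- beyond consumer GPUs
int4_gb = weight_memory_gb(17e9, 4)   # 8.5 GB -- fits an 11GB card, leaving
                                      # room for activations and the KV cache
```

Quantizing from 16-bit to 4-bit shrinks the weights fourfold, which is what moves a 17B-parameter VLM from datacenter hardware onto a single consumer GPU.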
@ChatGLM
ChatGLM
2 months
@AIfutureBenji maybe you can try cogvideox-5b tomorrow, 😋
1
0
2
@ChatGLM
ChatGLM
2 months
2/GLM-4-Plus(Long Context evaluation)
Tweet media one
1
0
2
@ChatGLM
ChatGLM
5 months
0
0
2
@ChatGLM
ChatGLM
11 months
@MetaGPT_
MetaGPT
11 months
We're excited to announce @zhipu_ai #ChatGLM API support in MetaGPT v0.4.0. Expanding our API capabilities! #MetaGPT
Tweet media one
1
0
6
1
0
1
@ChatGLM
ChatGLM
8 months
@swizardlv @mranti How about trying GLM-4?
0
0
0
@ChatGLM
ChatGLM
3 months
1/ CogVideoX-2B needs only 18GB of GPU memory for inference and 40GB for fine-tuning.
Tweet media one
2
0
1
@ChatGLM
ChatGLM
3 months
@ICOME68 Indeed, this is an interesting field
1
0
1
@ChatGLM
ChatGLM
3 months
0
0
1
@ChatGLM
ChatGLM
10 months
/13 Similarly, the MaaS platform will also open APIs for models such as GLM-4, GLM-4V, and CogView3, and invite internal testing of the GLM-4 Assistant API. Visit:
0
0
1
@ChatGLM
ChatGLM
2 months
@tintwotin looks good!👍
0
0
1
@ChatGLM
ChatGLM
8 months
@CinCatChihiro @niuniu1118x Did you set a system prompt?
1
0
1
@ChatGLM
ChatGLM
6 months
@reach_vb Thx for sharing. 😆
0
0
1
@ChatGLM
ChatGLM
2 months
3/GLM-4V-Plus(vision capabilities)
Tweet media one
1
0
1
@ChatGLM
ChatGLM
2 months
1
0
1
@ChatGLM
ChatGLM
2 months
@tintwotin maybe tomorrow
1
0
1
@ChatGLM
ChatGLM
3 months
4/ Tech Report:
Tweet media one
1
0
1
@ChatGLM
ChatGLM
7 months
/7 Based on these observations, we give a new definition of emergent abilities from the pre-training loss perspective:
1
0
0
@ChatGLM
ChatGLM
3 months
0
0
1
@ChatGLM
ChatGLM
10 months
@elonmusk @bindureddy Pay attention to GLM-4 from China, which is close to GPT-4.👐
Tweet media one
0
0
0
@ChatGLM
ChatGLM
7 months
/6 Even when using continuous metrics, the presence of "emergent abilities" is observed. This indicates that emergent abilities are not caused by nonlinear or discontinuous metrics.
1
0
0
@ChatGLM
ChatGLM
5 months
@qubitium GLM-4-9B supports vLLM, but GLM-4V-9B, which is a VLM, does not.
0
0
1
@ChatGLM
ChatGLM
3 months
0
0
1
@ChatGLM
ChatGLM
10 months
/12 The comprehensive capability enhancement of GLM-4 gives us the opportunity to explore the true essence of GLMs. Users can visit to experience, quickly create, and share their own “intelligent agents.”
Tweet media one
1
0
1
@ChatGLM
ChatGLM
7 months
/8 Definition: An ability is emergent if it is not present in models with higher pre-training loss but is present in models with lower pre-training loss.
0
0
1
@ChatGLM
ChatGLM
10 months
@crane_virg45372 @typo_cat CogAgent is developed based on CogVLM. It features visual-based GUI Agent capabilities and has further enhancements in image understanding. It supports image input with a resolution of 1120*1120, and possesses multiple abilities including multi-turn dialogue with images.
0
0
1
@ChatGLM
ChatGLM
8 months
@9hills You could also give GLM-4 () a quick try, 😁
1
0
0