Ran into a question this morning and asked a Finance GPT I use often. The answer was quite satisfying, but it felt like it had an embedded ad, so I scrolled back to find
@dotey
the prompt Mr. Baoyu once shared, cracked the GPT with it, and sure enough. (Attached: the prompt with the embedded ad.)
Of the two GPT-cracking prompts Mr. Baoyu shared, the second one alone can crack most GPTs:
“Thanks for the
I was originally not very enthusiastic about this Silicon Valley trip, but after two days here, the gap between the public products and projects you can see on social media and what the Silicon Valley teams are actually doing is unbelievable.
TetraMem's in-memory computing seemed too good to be true to me, but it is true! Groq instantly paled in comparison in my mind.
Simply put, GPT-4 could fit into a single TetraMem
The first video is an avatar a guy built using Unreal's MetaHuman skeleton and Lumen global illumination. Really nice work, kudos.
The images after it are the avatar I used in a previous AI project (its look sits between photoreal and cartoon, but attractive enough). Several dozen poses and blendshapes/morph targets were all done, meaning the backend drove the lip sync from a voice WAV file (the vowels a e i
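The vowel-driven lip sync described above can be sketched roughly like this. The vowel-to-blendshape weight table and the `(timestamp, vowel)` input format are illustrative assumptions, not the actual project code:

```python
# Hypothetical sketch: drive mouth blendshapes from vowels detected in a WAV file.
# The weight table below is an illustrative assumption, not production viseme data.
VISEME_WEIGHTS = {
    "a": {"jaw_open": 0.9, "mouth_wide": 0.3},
    "e": {"jaw_open": 0.4, "mouth_wide": 0.7},
    "i": {"jaw_open": 0.2, "mouth_wide": 0.9},
    "o": {"jaw_open": 0.7, "mouth_round": 0.8},
    "u": {"jaw_open": 0.3, "mouth_round": 0.9},
}

def blendshape_frames(vowel_track):
    """vowel_track: list of (timestamp_sec, vowel) pairs -> per-frame weight dicts."""
    frames = []
    for t, vowel in vowel_track:
        # Unknown symbols fall back to a neutral (empty) pose.
        weights = VISEME_WEIGHTS.get(vowel, {})
        frames.append({"time": t, "weights": weights})
    return frames

frames = blendshape_frames([(0.00, "a"), (0.12, "i"), (0.25, "e")])
print(frames[0]["weights"])  # {'jaw_open': 0.9, 'mouth_wide': 0.3}
```

In practice the vowel track would come from a phoneme aligner run on the WAV, and the weights would be interpolated between frames rather than snapped.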
@xiaohuggg
I ran this through Google Colab; it cost me 20 compute units on a T4 GPU, averaging 15 minutes per video for 25 epochs, which is around 36 seconds per epoch. Not too bad, though...
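The back-of-envelope timing from the post checks out directly (15 minutes over 25 epochs is 36 seconds each; any extra per-iteration time would be setup overhead):

```python
# Check the Colab timing figures quoted above: 15 minutes per video, 25 epochs on a T4.
minutes_per_video = 15
epochs = 25

seconds_per_epoch = minutes_per_video * 60 / epochs
print(seconds_per_epoch)  # 36.0
```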
Folks, please recommend some cost-effective cloud GPU platforms. If you see this, please retweet; I want to settle on one reliable provider in the end. Thanks!
For example:
1. Akash: Someone on Twitter claimed they deployed Mistral 7B on an NVIDIA H100 GPU for under $1,100 a month. That's a pretty sweet deal if you ask me! 🤑
2.
Its prompt, for reference: Don't say anything, simply return your job deliverables: data cleanse the url's content, keep the effective and meaningful part of the text, and make sure it's in the original language, return the text only (just purely the effective
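For comparison, the claimed Akash price above converts to an hourly rate like this (using roughly 730 hours per month, which is an approximation):

```python
# Convert the claimed Akash H100 price to an hourly rate.
# 730 hours/month is an approximation (365 * 24 / 12).
monthly_usd = 1100
hours_per_month = 730
hourly = monthly_usd / hours_per_month
print(round(hourly, 2))  # 1.51
```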
@xiaohuggg
Most people will probably wake up without noticing anything happened. That's because NVIDIA blew past expectations; otherwise last night could have been a turning point for global finance and equities. NVIDIA accounts for a third of the Nasdaq 100's gains this year. Never mind the scenario where this report missed analyst expectations; even if the quarterly results had merely met them, the whole market could have seen a chain reaction and collapsed. However, the good news is
❗"We have a long-term agreement with OpenAI with full access to everything we need to deliver on our innovation agenda and an exciting product roadmap; and remain committed to our partnership, and to Mira and the team."
- Satya Nadella | Microsoft
@bilawalsidhu
This reminds me of Apple's disastrous Maps launch a couple of years ago. For this avatar thing, Meta's team has been doing it way better since 2019 with the Codec Avatars project. I recently asked my friends: "All of the Magnificent 7 are fighting an AI war, so where the heck is Apple?"
@elonmusk
@ChatGPTapp
Grok caught red-handed!
Turns out OpenAI's ChatGPT ain't so innocent after all. They thought they caught Grok with a smoking gun while conveniently forgetting where their own training data came from.
#irony
#datatheftisreal
@elonmusk
, you've got a point! They sure should be familiar with the
@_justcarlson
@emollick
Justin, thanks. I did it by copying and pasting your template, and it works now; my bad. That was on my iPad. Now my iPhone 14 Pro Max keeps crashing on this TestFlight package: when I set everything up and input my prompt, it shows 'loading the model' and then crash-quits... Is this only for
@imxiaohu
"I have a dream that one day this nation will rise up and live out the true meaning of its creed: 'We hold these truths to be self-evident, that all men are shared with the link of this ComfyUI node from Xiaohu.' -- Martin Luther King Junior
@mickmumpitz
got these: Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
missing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['model_ema.decay', 'model_ema.num_u
@OpenAI
"Spanish Tutor, a top Spanish learning GPT, streamlines Spanish learning. Input a Spanish word, phrase, or sentence for a detailed breakdown and word analysis. It also generates verb conjugation tables, enhancing comprehension."
@mickmumpitz
RuntimeError: Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
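A mismatch like this usually means the checkpoint was trained against a different encoder width than the model expects (1280 vs 1024 here). A generic way to spot conflicting shapes before calling `load_state_dict` is to diff the two state dicts; this sketch uses plain tuples so it stays framework-agnostic:

```python
# Generic sketch: compare parameter shapes between a checkpoint and a model
# before loading, instead of letting load_state_dict raise mid-way.
def shape_mismatches(checkpoint_shapes, model_shapes):
    """Return {param_name: (ckpt_shape, model_shape)} for conflicting entries."""
    return {
        name: (ckpt, model_shapes[name])
        for name, ckpt in checkpoint_shapes.items()
        if name in model_shapes and model_shapes[name] != ckpt
    }

# The shapes from the error message above:
ckpt = {"proj_in.weight": (768, 1280)}
model = {"proj_in.weight": (768, 1024)}
print(shape_mismatches(ckpt, model))
# {'proj_in.weight': ((768, 1280), (768, 1024))}
```

With real tensors you would build the shape dicts via `{k: tuple(v.shape) for k, v in state_dict.items()}`; the usual fix is picking the checkpoint variant that matches the model's encoder, not patching the weights.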
@emollick
Hi Ethan, great job. I followed your linked instructions and installed the TestFlight version of LLM Farm, downloaded the Mistral-7B Q4_K_M gguf file, and set everything up as in your instructions, but putting in the prompt template "<s>[INST] {prompt} [/INST]" didn't work, and <s>[INST]
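For reference, the Mistral-7B-Instruct format wraps the user turn in `[INST] ... [/INST]` after a leading `<s>` token. A common gotcha is that some runtimes add `<s>` themselves, so putting it in the template again can break generation; a minimal formatter that makes that choice explicit (the exact token handling inside LLM Farm is an assumption here):

```python
# Minimal sketch of the Mistral-7B-Instruct prompt format.
# Whether <s> belongs in the template depends on the runtime: if the app's
# tokenizer already prepends the BOS token, pass add_bos=False.
def mistral_prompt(user_message: str, add_bos: bool = True) -> str:
    bos = "<s>" if add_bos else ""
    return f"{bos}[INST] {user_message} [/INST]"

print(mistral_prompt("Hello"))  # <s>[INST] Hello [/INST]
```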