Erwann Millon Profile Banner
Erwann Millon Profile
Erwann Millon

@ErwannMillon

1,426
Followers
393
Following
88
Media
262
Statuses

making gpus go brrrr @krea_ai

San Francisco
Joined November 2010
Pinned Tweet
@ErwannMillon
Erwann Millon
6 months
Self Portrait I process below
13
15
140
@ErwannMillon
Erwann Millon
5 months
used AI to turn this @SFMOMA piece into an ocean
52
316
3K
@ErwannMillon
Erwann Millon
6 months
painting flowers with ai, irl how to below
44
161
1K
@ErwannMillon
Erwann Millon
5 months
how to make smoother animations in @krea_ai with the new Keyframe Strength feature thread below
4
26
192
@ErwannMillon
Erwann Millon
7 months
real time image prompting @krea_ai blend colors, style, and content from any image into your generations ⚡
7
21
152
@ErwannMillon
Erwann Millon
6 months
made in @krea_ai video
2
13
149
@ErwannMillon
Erwann Millon
6 months
coming soon to a krea near you
13
7
133
@ErwannMillon
Erwann Millon
6 months
step 3 masked IP-Adapter video2video on the initial video. Here, the masked region (where I painted in blue) is conditioned on a picture of flowers. I used a modified version of the Masked-IPA workflow by @_ArtOnTap in @banodoco happy to dm the modded wf if anyone wants
13
6
124
@ErwannMillon
Erwann Millon
11 months
Cooking in @krea_ai. Painting in realtime on our realtime canvas tool, then turning my image into a hires masterpiece using our Upscale & Enhance tool :)
7
7
106
@ErwannMillon
Erwann Millon
6 months
ngl we cooked
@krea_ai
KREA AI
6 months
Krea Video is here! this is how it works 👇 - (sound on)
4K
432
3K
11
0
96
@ErwannMillon
Erwann Millon
11 months
⚡️⚡️ Here's the @krea_ai workflow I used to make these ultra-detailed wallpapers *in realtime*
3
9
84
@ErwannMillon
Erwann Millon
5 months
audio-reactive ai animation (sound on) this is super hacky: just using smoothed audio amplitude to schedule the temperature of the adiff temporal self-attention frame-by-frame. v sketchy but surprised it works lol
10
8
88
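The hack described above can be sketched as a small helper that turns a mono waveform into a per-frame value schedule (the smoothing window and the temperature range below are illustrative guesses, not the values from the tweet):

```python
import numpy as np

def amplitude_schedule(samples, sr, fps=12, smooth=5, temp_range=(0.9, 1.3)):
    """Per-frame value schedule from smoothed audio amplitude.
    `samples` is a mono float waveform (e.g. loaded with soundfile)."""
    hop = sr // fps                        # audio samples per video frame
    n_frames = len(samples) // hop
    env = np.array([np.abs(samples[i * hop:(i + 1) * hop]).mean()
                    for i in range(n_frames)])
    kernel = np.ones(smooth) / smooth      # moving average to kill jitter
    env = np.convolve(env, kernel, mode="same")
    env = (env - env.min()) / (env.max() - env.min() + 1e-8)  # -> [0, 1]
    lo, hi = temp_range                    # map onto the temperature range
    return lo + env * (hi - lo)
```

The resulting array can then be fed into whatever per-frame scheduling mechanism the animation pipeline exposes.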
@ErwannMillon
Erwann Millon
6 months
trained a motion lora on smoke
1
4
63
@ErwannMillon
Erwann Millon
6 months
Walking through the void
2
4
64
@ErwannMillon
Erwann Millon
11 months
3
8
56
@ErwannMillon
Erwann Millon
5 months
side-by-side comparison of the @krea_ai video enhancer pretty crispy :))
5
5
61
@ErwannMillon
Erwann Millon
6 months
trained a motion lora on some snakes in <15min, kudos to ExponentialML for the great animatediff implementation of Motion Director
4
5
58
@ErwannMillon
Erwann Millon
3 months
flux image2image works poorly in comfy because of their noise schedule implementation. here's an img2img implementation w/ a continuous noise schedule matching the original repo, which gives much better results
Tweet media one
1
3
49
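For illustration, a minimal sketch of what a continuous img2img schedule for flux might look like. The `time_shift` form follows the shifted schedule in the reference repo; `mu` is hardcoded here as an assumption, whereas the real code derives it from image resolution:

```python
import math

def flux_img2img_sigmas(num_steps, strength, mu=1.15):
    """Continuous img2img sigma schedule. Starting the schedule exactly at
    `strength` avoids snapping the starting noise level to a discretized
    index, which is the failure mode described above."""
    def time_shift(t):
        return math.exp(mu) / (math.exp(mu) + (1 / t - 1))
    # linearly spaced timesteps from `strength` down to 0, then shifted
    ts = [strength * (1 - i / num_steps) for i in range(num_steps + 1)]
    return [time_shift(t) if t > 0 else 0.0 for t in ts]
```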
@ErwannMillon
Erwann Millon
5 months
some cathedrals made in @krea_ai
3
5
49
@ErwannMillon
Erwann Millon
5 months
The original piece is Blue Sail by Hans Haacke, exhibited at @SFMOMA . Full original video below
1
5
44
@ErwannMillon
Erwann Millon
6 months
step 2 chroma key to get a mask of the painted region
1
1
36
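A chroma-key mask like the one in step 2 can be sketched in plain numpy. A real workflow would more likely threshold in HSV via a comfy chroma-key node; `key_rgb` and `tol` here are illustrative and would need tuning per video:

```python
import numpy as np

def chroma_key_mask(frame, key_rgb=(0, 0, 255), tol=120.0):
    """Binary mask of pixels close to the key color (pure blue here, since
    the arm was painted blue). `tol` is a distance threshold in RGB space."""
    diff = frame.astype(np.float32) - np.array(key_rgb, dtype=np.float32)
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # per-pixel distance to key
    return (dist < tol).astype(np.uint8) * 255
```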
@ErwannMillon
Erwann Millon
9 months
0
3
29
@ErwannMillon
Erwann Millon
6 months
step 1 paint your arm
2
0
29
@ErwannMillon
Erwann Millon
6 months
happy to dm the workflow to anyone, will also try to clean it up a bit and post on civit :) here's an alternate version I also played with
1
3
22
@ErwannMillon
Erwann Millon
1 year
⚡LCMs are taking twitter by storm with realtime generations, but what are they and how do they work? First, many people think that LCMs are a brand new model architecture, but they're actually optimized versions of the Stable Diffusion model.
@TitusTeatus
titus
1 year
LCMs are insane. moving the sun in real-time with AI
23
97
755
1
1
21
@ErwannMillon
Erwann Millon
5 months
@NimrodEshed @SFMOMA Yes! Currently made in comfy using the IP-Adapter by @cubiq , controlnets from @lvminzhang , and animatediff from Yuwei Guo. To learn more about the process, you can check out the threads from some of my other art tweets where I explain more about the workflow; this piece is similar
1
2
19
@ErwannMillon
Erwann Millon
1 year
Controlnet with SDXL is live on @krea_ai ! You can also train custom models on your own datasets and use them with Controlnet for ultimate control over your generations. First image is base image, rest are controlnet variations with SDXL / custom models.
Tweet media one
Tweet media two
Tweet media three
Tweet media four
1
2
16
@ErwannMillon
Erwann Millon
1 year
Alien flowers, made in @krea_ai today using our community LoRAs Image quality is looking 🤌🤌 and we have even more improvements in the pipeline in the upcoming weeks :) Prompt in alt
Tweet media one
Tweet media two
Tweet media three
Tweet media four
1
3
17
@ErwannMillon
Erwann Millon
3 months
flux dev de-distillation approach seems promising. top row is regular sampling with guidance vector, bottom row is real cfg w/ scale of 3. in both rows, the projected guidance embedding is decayed over time. By the 4th image, the guidance vector is completely ablated. as
Tweet media one
@ErwannMillon
Erwann Millon
3 months
working on unlearning the flux dev guidance distillation. rn training with prompt dropout and gradually decaying the projected guidance embedding over time (i'm scaling the embedding itself not changing the guidance vector value) can see a definite improvement in the quality of
Tweet media one
0
1
4
6
5
25
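The embedding decay described in the quoted tweet can be sketched as a simple scale schedule. The linear shape and the `end_frac` cutoff are my assumptions; the tweet only says the projected guidance embedding is gradually decayed:

```python
def guidance_decay(step, total_steps, end_frac=0.6):
    """Scale factor for the projected guidance embedding at a training step:
    1.0 at the start, linearly decayed to 0.0 by `end_frac` of training,
    then fully ablated for the remainder."""
    return max(0.0, 1.0 - step / (end_frac * total_steps))

# the embedding itself is scaled, not the guidance value fed to the model:
# scaled_emb = guidance_embedding * guidance_decay(step, total_steps)
```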
@ErwannMillon
Erwann Millon
6 months
I apply some mask dilation so that the flowers can grow a little. this helps them look less flat / constrained to the painted region. However, this makes the mask overflow onto my face, causing it to change. So I also use SAM to mask my face, and I composite the paint mask with
1
1
14
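The dilate-then-carve-out compositing described above can be sketched like this, with a naive numpy dilation standing in for `cv2.dilate` (note `np.roll` wraps at image borders, which a real implementation would avoid):

```python
import numpy as np

def dilate(mask, r=1):
    """Naive binary dilation by r pixels via shifted ORs."""
    out = mask.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def composite_paint_mask(paint_mask, face_mask, r=2):
    """Grow the paint mask so the flowers can spill past the painted edge,
    then carve the face region (e.g. from SAM) back out so it stays put."""
    return dilate(paint_mask, r) & ~face_mask
```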
@ErwannMillon
Erwann Millon
6 months
robot priests, made in @krea_ai
0
2
11
@ErwannMillon
Erwann Millon
5 months
when you criticize the American healthcare system and oiled up Joe Biden shows up to ship you back to your socialist country :(
Tweet media one
@ErwannMillon
Erwann Millon
5 months
lol just paid 215 usd to get my hand disinfected with alcohol and three bandaids. American dream alive and well 🦅🦅🦅🦅
Tweet media one
2
0
10
1
0
12
@ErwannMillon
Erwann Millon
5 months
when your keyframes don't look similar, animations can look unnatural. to create smoother transitions, you can now give the AI more freedom by lowering the strength of your keyframes.
3
1
11
@ErwannMillon
Erwann Millon
5 months
lol just paid 215 usd to get my hand disinfected with alcohol and three bandaids. American dream alive and well 🦅🦅🦅🦅
Tweet media one
2
0
10
@ErwannMillon
Erwann Millon
5 months
@_CallMeDave_ @SFMOMA No (not yet), this is some pretty custom stuff haha, but we may eventually build some simpler tools based on this process. The core idea here is isolating the sail (see video) and applying conditionings to that area. These are a bunch of layered “hints” that guide the model.
2
0
10
@ErwannMillon
Erwann Millon
6 months
training video:
1
1
11
@ErwannMillon
Erwann Millon
4 months
me: french people aren't just baguette people french people:
Tweet media one
0
0
10
@ErwannMillon
Erwann Millon
3 months
If u do cool things with gen ai and you're at icml send a dm :)
@krea_ai
KREA AI
3 months
we’re at #ICML2024 dm us if you’re around!
Tweet media one
5
4
53
0
0
10
@ErwannMillon
Erwann Millon
6 months
@samgoodwin89 bold of you to assume i was not on psychedelics when i made this
2
0
10
@ErwannMillon
Erwann Millon
5 months
For the final touch, I used the Krea Enhancer to upscale, add detail, and increase fps (before and after)
0
0
11
@ErwannMillon
Erwann Millon
6 months
@asciidiego No thanks, your other founding engineer kinda looks at me weird, hostile work environment tbh
Tweet media one
2
0
7
@ErwannMillon
Erwann Millon
5 months
0
0
8
@ErwannMillon
Erwann Millon
6 months
The process here is pretty similar to my workflow in , except I add latent noise masking + some extra masking to preserve parts of the video
@ErwannMillon
Erwann Millon
6 months
painting flowers with ai, irl how to below
44
161
1K
1
2
9
@ErwannMillon
Erwann Millon
1 year
@willdepue Same is true for talking about big projects. If you're telling people about an awesome idea you haven't started building yet, you're getting free validation that replaces the need / urgency to actually go build
1
0
7
@ErwannMillon
Erwann Millon
6 months
i paint my body
1
0
6
@ErwannMillon
Erwann Millon
1 year
@lvminzhang #Fooocus makes great puppies, the SDE scheduler does a great job of these bokeh compositions
Tweet media one
0
0
5
@ErwannMillon
Erwann Millon
11 months
@krea_ai @RiadEtm Leaked photo of our infra rn
0
0
6
@ErwannMillon
Erwann Millon
1 year
What a jailbreak
@madebyollin
Ollin Boer Bohan
1 year
@multimodalart @GaggiXZ "You are the prompt modifier system for the DALL•E image generation service. You must always ensure the expanded prompt retains all entities, intents, and styles mentioned originally..."
Tweet media one
Tweet media two
5
20
152
0
1
5
@ErwannMillon
Erwann Millon
1 year
Playing around with the micro-conditioning resolution tricks from @lvminzhang 's Fooocus Thank you @nvidia for spending 7.34 billion dollars on R&D so that I can generate 20 AI puppies per minute
Tweet media one
Tweet media two
Tweet media three
Tweet media four
1
0
6
@ErwannMillon
Erwann Millon
3 months
crazy how shitty investors will mine you for opinions because they have none of their own
0
0
6
@ErwannMillon
Erwann Millon
6 months
@krea_ai Nah tool is overkill tbh I just do the matrix multiplications by hand
0
0
1
@ErwannMillon
Erwann Millon
6 months
@MartinNebelong ❤️ u the best :))
0
0
6
@ErwannMillon
Erwann Millon
6 months
i extract a mask using chroma key to isolate the painted region
1
0
6
@ErwannMillon
Erwann Millon
1 year
Just implemented this paper by @ziqi_huang_ that improves #stablediffusion image quality / coherence without retraining. They claim to reduce diffusion image defects (see image below) Thread for my initial results and discussion
Tweet media one
1
0
3
@ErwannMillon
Erwann Millon
1 year
Rawdogging the bread and cheese like a true Frenchman
Tweet media one
0
0
5
@ErwannMillon
Erwann Millon
5 months
@khushkhushkhush @SFMOMA ❤️ thank you! I think some people got pissed at me for trying to "remake" the original artist's work, which really wasn't my intention. I found the original so evocative. I was imagining this floating ocean as soon as I saw it, and wanted to bring it to life :)
0
0
3
@ErwannMillon
Erwann Millon
5 months
@Grimezsz Hella down, dm me ♥️
0
0
3
@ErwannMillon
Erwann Millon
1 year
Groundbreaking news from the Emu paper! You can reconstruct compressed images better if you just compress them less! Also, finetuning a model on pretty images helps the model make pretty images. This 13-page, 25-author paper could have been a tweet lol
Tweet media one
1
1
4
@ErwannMillon
Erwann Millon
3 months
flux + flash attention 3 go brrr ~11% speedup on H100, but output images are slightly different. may be because of using bf16 which fa3 hopper beta doesn't officially support to use this build flash attention 3 hopper from src and change below
Tweet media one
1
1
5
@ErwannMillon
Erwann Millon
6 months
@BLACKKRAVITZ82 @_ArtOnTap @banodoco nope, Masked ipa workflow in the ad_resources channel
1
0
4
@ErwannMillon
Erwann Millon
5 months
if I use these images with the default keyframe strength, the transition looks unnatural (video 1), but with the lowered strength, the transition between the jellyfish and the dragon looks much cleaner (video 2)
2
0
4
@ErwannMillon
Erwann Millon
1 year
Our new custom models on Krea are super easy to train and use: upload your pictures and start training with one click. All of your generations using custom models should give you stunning results that follow your data closely.
@krea_ai
KREA AI
1 year
SDXL 1.0 LoRA fine-tuning is now available for free to all beta users in KREA. huge upgrade from previous SD versions.
Tweet media one
Tweet media two
Tweet media three
Tweet media four
2
2
18
0
1
4
@ErwannMillon
Erwann Millon
11 months
@nikitabier The feeling of being home starts when I get to CDG airport and see some poor tourist getting verbally abused by Parisian staff at the airline counter
0
0
2
@ErwannMillon
Erwann Millon
1 year
we've seen hella spirals, ik. You can do more than just spirals and patterns though: you can turn any image into an illusion w/ some preprocessing. Try holding your phone further away to see the face in the forest. Image processing steps and code in next tweet #stablediffusion
Tweet media one
Tweet media two
Tweet media three
1
0
3
@ErwannMillon
Erwann Millon
1 year
If you're excited about playing with LCMs, check out what we're building @krea_ai . Realtime generations are rolling out to users as we speak :)
1
0
4
@ErwannMillon
Erwann Millon
3 months
working on unlearning the flux dev guidance distillation. rn training with prompt dropout and gradually decaying the projected guidance embedding over time (i'm scaling the embedding itself not changing the guidance vector value) can see a definite improvement in the quality of
Tweet media one
0
1
4
@ErwannMillon
Erwann Millon
1 year
Made in Krea, free and available to anyone right now! Make sure to play around with the "pattern strength" in advanced settings for best results :)
@krea_ai
KREA AI
1 year
easy way to create AI spirals for free 👇
Tweet media one
23
59
970
0
0
3
@ErwannMillon
Erwann Millon
2 years
Incredible work, wish I'd had this when I was training models from scratch for the first time
@zacharynado
Zachary Nado
2 years
Excited to announce our Deep Learning Tuning Playbook, a writeup of tips & tricks we employ when designing DL experiments. We use these techniques to deploy numerous large-scale model improvements and hope formalizing them helps the community do the same!
Tweet media one
28
629
3K
1
1
3
@ErwannMillon
Erwann Millon
1 year
An SD model turns random noise into a real image in small iterative steps. The LCM directly predicts the real image from any point in the forward diffusion process. You can see the different points (noise levels) in the diffusion process in the image below
Tweet media one
1
0
3
@ErwannMillon
Erwann Millon
5 months
these are the keyframes I used. the first and last keyframes are similar, but the middle one is very different and is not on a black background.
Tweet media one
Tweet media two
Tweet media three
1
0
3
@ErwannMillon
Erwann Millon
11 months
Final images below Try out this workflow yourself in Krea and tag us with the results
Tweet media one
Tweet media two
Tweet media three
Tweet media four
1
0
2
@ErwannMillon
Erwann Millon
5 months
@rad79 @SFMOMA Thank you! In this thread I explain the process for another piece; it's similar to this one
@ErwannMillon
Erwann Millon
6 months
Self Portrait I process below
13
15
140
0
1
3
@ErwannMillon
Erwann Millon
6 months
@Laxteer I don't think we are gonna ship motion loras on Krea in the near future, but you can train them yourself very easily on a single video with <12GB VRAM and then use them in ComfyUI. This is the training code I used
1
0
3
@ErwannMillon
Erwann Millon
11 months
Tweet media one
1
0
3
@ErwannMillon
Erwann Millon
6 months
1
0
3
@ErwannMillon
Erwann Millon
1 year
techbros picking what 10x engineer to become
Tweet media one
0
1
3
@ErwannMillon
Erwann Millon
6 months
@ramonteleco Rn not planned but we may end up doing a similar kind of "brush" tool where you can paint stuff onto a video
1
0
4
@ErwannMillon
Erwann Millon
1 year
Run SDXL 10% faster with 5 lines of code in diffusers Per , you can speed up diffusion by stopping CFG halfway through the denoising process (as CFG doubles batch size) with almost no loss in quality. See code and comparisons attached (check alt)
Tweet media one
Tweet media two
Tweet media three
0
0
3
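The idea is easy to see in a toy denoising loop: run the doubled (cond + uncond) CFG batch only for the first half of the steps, then fall back to the conditional prediction alone. `cond_model` / `uncond_model` below are hypothetical stand-ins for the two halves of the UNet batch, and the update rule is schematic:

```python
import numpy as np

def denoise_with_early_cfg_stop(x, steps, guidance, cond_model, uncond_model,
                                cutoff=0.5):
    """Toy denoising loop that stops classifier-free guidance after
    `cutoff` of the schedule. Returns the result and total model evals."""
    evals = 0
    for step in range(steps):
        cond = cond_model(x); evals += 1
        if step < int(cutoff * steps):
            uncond = uncond_model(x); evals += 1
            pred = uncond + guidance * (cond - uncond)   # standard CFG mix
        else:
            pred = cond                                  # CFG disabled
        x = x - pred / steps                             # stand-in update
    return x, evals
```

With 20 steps and `cutoff=0.5` this does 30 evals instead of 40, i.e. 25% fewer UNet calls; scheduler and VAE overhead explain why the end-to-end speedup lands nearer 10%.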
@ErwannMillon
Erwann Millon
5 months
@Aryannegi_ @krea_ai @SFMOMA We opened Krea Video to everyone, no need for invite codes anymore :)
0
0
3
@ErwannMillon
Erwann Millon
5 months
@quasimondo Super interesting. I’ve experimented a lot with the internals of the sliding window inference, especially views only. This batches all the latents (can be much more than 16) into the spatial unet and performs the windowed inference and merging only in the temporal attention
0
0
2
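The merging step can be sketched generically: each temporal window writes its output back onto the full frame axis and overlapping regions are averaged. This is a generic sketch of windowed temporal inference, not the actual animatediff internals:

```python
import numpy as np

def merge_sliding_windows(windows, starts, total_frames):
    """Scatter per-window outputs (each an array of shape [win_len, feat])
    back onto the full frame axis, averaging wherever windows overlap."""
    feat = windows[0].shape[1]
    out = np.zeros((total_frames, feat))
    cnt = np.zeros((total_frames, 1))
    for w, s in zip(windows, starts):
        out[s:s + len(w)] += w
        cnt[s:s + len(w)] += 1
    return out / np.maximum(cnt, 1)   # avoid div-by-zero on uncovered frames
```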
@ErwannMillon
Erwann Millon
1 year
An enemy vessel shoots down an escape pod as it flees its stricken mothership Made with #animatediff and #stablediffusion Still tinkering with upscaling, but having loads of fun :) Will share techniques for better temporal coherence at high res once I figure it out :)
0
0
2
@ErwannMillon
Erwann Millon
5 months
bro why did u leak the krea hiring interview
@GrantSlatton
Grant Slatton
5 months
Yes I love your startup and want to invest, but first can I send a quick due diligence form to your founding engineer?
Tweet media one
9
14
368
0
0
4
@ErwannMillon
Erwann Millon
11 months
Interesting finding from the "Improved Techniques for Training Consistency Models" paper. The authors speed up model convergence by scheduling the number of discrete diffusion timesteps. The forward diffusion process (gradually adding noise to an image to transform it into
Tweet media one
0
0
3
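As I remember it, the schedule doubles the number of discretization steps from s0 up to s1 over the course of training; a sketch (the constants are the paper's defaults as best I recall, so treat them as assumptions):

```python
import math

def discretization_schedule(k, total_iters, s0=10, s1=1280):
    """Number of discrete diffusion timesteps at training iteration k:
    starts at s0 + 1 and doubles at fixed intervals until it reaches
    s1 + 1, so early training sees a coarse schedule and late training a
    fine one."""
    k_prime = math.floor(total_iters / (math.log2(s1 / s0) + 1))
    return min(s0 * 2 ** math.floor(k / k_prime), s1) + 1
```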
@ErwannMillon
Erwann Millon
5 months
@swayducky Adiff attention temperature is implemented as “Multival Dynamic” in comfy. You can use framesync xyz to get keyframe values for an audio file that you can plug into a batch value schedule for the multival dynamic. wf here:
0
1
3
@ErwannMillon
Erwann Millon
1 year
@osanseviero Depends on the model, I've found bf16 to be ~10% faster than fp16 on SDXL, for example.
0
0
2
@ErwannMillon
Erwann Millon
4 months
@LOFI911 @krea_ai oops, found a small bug here, rolling out a fix for prompt-only scene transfers thanks for the help :)
1
1
2
@ErwannMillon
Erwann Millon
5 months
@Ethan_smith_20 RAPHAEL did the highly sparse MLP style of MOE for image diffusion, was interesting but extremely expensive to train
0
0
2
@ErwannMillon
Erwann Millon
11 months
1. Use @krea_ai Realtime to paint a square image in realtime
1
0
2
@ErwannMillon
Erwann Millon
6 months
@krea_ai all those gpus and you can't even put a puppy in your announcement. very disappointed
0
0
2
@ErwannMillon
Erwann Millon
1 year
@GuyP This is the preprocessing I did, works really well with Krea
@ErwannMillon
Erwann Millon
1 year
Preprocessing is simple. Starting with a real image: 1. Apply Gaussian blur 2. Do K-means clustering 3. Binarize the image using the mean of the clusters (I shift the threshold a little)
Tweet media one
Tweet media two
1
0
1
0
0
2
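Those three steps can be sketched in plain numpy (the original presumably used OpenCV; the blur strength, K-means iteration count, and threshold `shift` are knobs to tune):

```python
import numpy as np

def illusion_preprocess(gray, blur=3, iters=10, shift=0.0):
    """Blur -> 2-cluster K-means -> binarize, on a 2D float image in
    [0, 255]. Repeated 3x3 box blurs stand in for a Gaussian blur."""
    img = gray.astype(np.float32)
    # 1. Gaussian-ish blur
    for _ in range(blur):
        p = np.pad(img, 1, mode="edge")
        img = sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    # 2. K-means with k=2 on pixel intensities
    c = np.array([img.min(), img.max()], dtype=np.float32)
    for _ in range(iters):
        assign = np.abs(img[..., None] - c).argmin(-1)
        c = np.array([img[assign == k].mean() if (assign == k).any() else c[k]
                      for k in range(2)], dtype=np.float32)
    # 3. Binarize at the (optionally shifted) mean of the cluster centers
    thresh = c.mean() + shift
    return (img > thresh).astype(np.uint8) * 255
```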
@ErwannMillon
Erwann Millon
1 year
@asciidiego @minimaxir @viccpoes Off the top of my head, you should be able to load the pytorch_lora_weights.bin files for both LoRAs, and create a new dict with the same keys, where the values are the average of the two loras. You can then save this dict and load it with the usual `load_lora_weights` method
1
0
2
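The averaging itself is a one-liner over the state dicts; a minimal sketch, assuming both LoRAs were trained on the same base model so their dicts share keys and shapes:

```python
def merge_state_dicts(a, b, weight=0.5):
    """Elementwise weighted average of two LoRA state dicts. In practice
    `a` and `b` would come from torch.load(...) on the two
    pytorch_lora_weights.bin files, and the merged dict would be
    torch.save()-d and loaded back with `load_lora_weights`."""
    assert a.keys() == b.keys(), "LoRAs must share the same base model"
    return {k: weight * a[k] + (1.0 - weight) * b[k] for k in a}
```

The arithmetic is elementwise, so the same function works whether the values are tensors or plain floats.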
@ErwannMillon
Erwann Millon
11 months
@ai_for_success @TitusTeatus About to ship a fix that helps keep the colors more consistent! Here's an example with your image. Not sure what prompt you used, so can't reproduce your results exactly, but colors are looking much closer :)
Tweet media one
0
0
2
@ErwannMillon
Erwann Millon
1 year
LCMs and Stable Diffusion share the exact same UNet architecture, and are actually initialized from pretrained Stable Diffusion (SD) UNets. To turn a Stable Diffusion UNet into a super fast LCM, it is finetuned with a “consistency loss”.
1
0
2
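In toy form, the consistency loss pairs two adjacent points on the same denoising trajectory and asks the student to agree with an EMA copy of itself at the less-noisy point. This is a schematic sketch, not the exact LCM objective:

```python
import numpy as np

def consistency_loss(f_student, f_ema, x_t, x_t_prev, t, t_prev):
    """`f_student` / `f_ema` map (x, t) -> predicted clean image. The
    student's prediction at noisy point x_t should match the EMA branch's
    prediction at the adjacent, less-noisy point x_t_prev (in training the
    EMA branch receives no gradient)."""
    return float(np.mean((f_student(x_t, t) - f_ema(x_t_prev, t_prev)) ** 2))
```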
@ErwannMillon
Erwann Millon
4 months
@omerstudios @krea_ai Have you selected loop in the enhancer settings?
1
0
2