David M. Comfort

@DavidmComfort

2,353 Followers
3,317 Following
2,153 Media
13,057 Statuses

Scientist (D.Phil. in Biochemistry at Oxford), computational biology, Data Scientist, Machine Learning and Generative AI Engineer, AI Filmmaker

West Hollywood, CA
Joined March 2010
Pinned Tweet
@DavidmComfort
David M. Comfort
2 months
A new trailer for a film I am working on - "The Lost Prince - Arthur's Last Stand". SOUND ON. I hope to make an entire feature-length film in the coming months. The list of tools I used is in the comments - I mainly used @midjourney and @LumaLabsAI's Dream Machine.
8
8
61
@DavidmComfort
David M. Comfort
1 year
@themaxburns What has happened to the NYT?? If one wants a solid understanding of current events from a systematic, contextual and in-depth perspective, the NYT is pretty far from providing it.
33
40
767
@DavidmComfort
David M. Comfort
1 year
@themaxburns When I look at the web front page of the NYT, I think, OK, I can skip that story, and skip that one, and so on until I almost reach the bottom of the page.
14
15
513
@DavidmComfort
David M. Comfort
9 months
Here is my latest revision of "The Lost Prince - Arthur and the Woods of Avalon" using generative AI tools, @pika_labs 1.0 for image-to-video and @midjourney for text-to-image. I also used @elevenlabsio for voice-over. Generative AI has come a long way and will only get better
18
36
409
@DavidmComfort
David M. Comfort
1 month
And yet another comparison for image-to-video. This time I'm looking at how the different platforms handle special effects scenes - car chases. The comparison is between Runway Gen-3, Luma Labs, and Kling professional mode. LET ME KNOW WHAT YOU THINK IN THE COMMENTS.
10
23
178
@DavidmComfort
David M. Comfort
1 month
Here is a quick comparison for image-to-video between (1) Luma Labs Dream Machine @LumaLabsAI, (2) @runwayml's new image-to-video Gen-3, and (3) @Kling_ai image-to-video. I used pretty simple prompts for all of them. Let me know what you think. I'll continue to do more.
12
25
165
@DavidmComfort
David M. Comfort
6 months
Here is a quick test of @pika_labs using Lip Sync. I generated the images using @Midjourney, created the original videos in @runwayml, and then used Pika to lip sync an audio clip created using @elevenlabsio.
28
26
138
@DavidmComfort
David M. Comfort
3 months
8
16
113
@DavidmComfort
David M. Comfort
7 months
Here is an AI video I did for the Hackathon at my company, demonstrating what can be done with AI video. Used Midjourney, @runwayml, @pika_labs, and @D_ID_.
15
10
82
@DavidmComfort
David M. Comfort
4 months
A new short film exploring different cinematic styles. I am working on a feature-length film, "The Lost Prince - Arthur and the Woods of Avalon", and trying out different cinematic styles. Images were created in @midjourney, image-to-video clips were created in @runwayml and
16
8
71
@DavidmComfort
David M. Comfort
5 months
I'm working on a new project and I wanted to test the combo of @suno_ai_ music and @runwayml lip sync to see how well it works. This was a really quick test.
11
8
71
@DavidmComfort
David M. Comfort
1 year
@Rufus87078959 Looks good, Rufus. Here is the latest iteration of the trailer for "The Lost Prince"
42
2
67
@DavidmComfort
David M. Comfort
1 month
You can make a character smile in Luma Labs, Runway Gen-3, and Kling AI (but watch the movement of the characters in the background). The image-to-video prompt was "smiling young man in medieval tavern". First, Luma Labs:
8
4
67
@DavidmComfort
David M. Comfort
1 month
Here's another comparison between Luma Labs, Runway Gen-3, and Kling AI. This time for cinematic shots. Music was created in @udiomusic. SOUND ON. Let me know what you think. Which tool do you prefer?
5
16
68
@DavidmComfort
David M. Comfort
9 months
@nickfloats Great comparison Nick! One challenge is going to be translating v5.2 prompts to v6. I’m in the middle of creating a story using MJ images and I have thousands of prompts which are tied to specific compositions. I need to “translate” them all
6
3
61
@DavidmComfort
David M. Comfort
1 year
@mreflow I managed to get Code Interpreter to rotate, change the brightness of, and interpolate between images, as well as add an audio track, by generating Python code and then plugging it into a Jupyter notebook (the images are from Midjourney).
5
8
61
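For context, a minimal sketch of the kind of Python that Code Interpreter might generate for this workflow, assuming Pillow, NumPy, and moviepy are available; the file names, rotation angle, and brightness factor are purely illustrative, not taken from the original thread.

```python
# Illustrative only: rotate one Midjourney still, brighten another,
# cross-fade (interpolate) between them, and attach an audio track.
import numpy as np
from PIL import Image, ImageEnhance
from moviepy.editor import ImageSequenceClip, AudioFileClip

a = Image.open("frame_a.png").convert("RGB").rotate(5, expand=True)  # hypothetical file
b = Image.open("frame_b.png").convert("RGB")                         # hypothetical file
b = ImageEnhance.Brightness(b).enhance(1.2)   # +20% brightness
b = b.resize(a.size)                          # blend() needs matching sizes

# Linear cross-fade between the two stills: 48 frames at 24 fps (2 seconds)
frames = [np.array(Image.blend(a, b, t)) for t in np.linspace(0.0, 1.0, 48)]

clip = ImageSequenceClip(frames, fps=24)
clip = clip.set_audio(AudioFileClip("soundtrack.mp3").subclip(0, clip.duration))
clip.write_videofile("interpolated.mp4")
```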
@DavidmComfort
David M. Comfort
7 months
@bennash Here's a quick video I did of it
2
11
53
@DavidmComfort
David M. Comfort
2 months
@ezraklein “Riding with Biden”
2
4
46
@DavidmComfort
David M. Comfort
5 months
Here is my latest short film, "Across the Cosmos" - one possible future of human space exploration across the Solar System and Beyond... I was inspired by "Wanderers" by Erik Wernquist, and the "Pale Blue Dot" by Carl Sagan. I used @midjourney for all of the images, @runwayml
10
8
47
@DavidmComfort
David M. Comfort
24 days
Luma Labs again
3
5
45
@DavidmComfort
David M. Comfort
17 days
Luma Labs image-to-video certainly seems improved, even though I thought Luma 1.5 was supposed to be mainly for text-to-video.
5
4
44
@DavidmComfort
David M. Comfort
8 months
Here is my latest AI-powered History documentary, "Battle of Midway", using @midjourney, @pika_labs, @runwayml and @Magnific_AI.
12
10
42
@DavidmComfort
David M. Comfort
7 months
Here is a method to get reasonably good results for a consistent character and then place the character in different settings. Not perfect, but it works. The formula is [text prompt of setting] + [text prompt of character] + [style ref of character] + [style ref of setting].
9
4
42
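For illustration, a hypothetical Midjourney prompt following that formula; the wording and reference-image URLs are placeholders, not taken from the original thread:

```
/imagine a torch-lit medieval tavern with crowded wooden tables, a young knight with auburn hair in a worn green cloak --sref <character-reference-image-URL> <setting-reference-image-URL>
```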
@DavidmComfort
David M. Comfort
11 months
@pika_labs does a good job of creating videos of people on horseback and generating a lot of movement.
3
7
41
@DavidmComfort
David M. Comfort
6 months
Testing AI Lip Sync tools - @runwayml, @pika_labs, @syncdotso (Sync Labs), and @D_ID_. For Runway, I also tested out different zoom levels. Runway is nice because it extends the video for you. Sync Labs appears to loop the video in order to extend it. D-ID doesn't allow video
10
7
40
@DavidmComfort
David M. Comfort
8 months
2
3
42
@DavidmComfort
David M. Comfort
4 months
I am experimenting with recreating the styles of well-known cinematographers that could be used as a style reference in Midjourney (using --sref [reference images]). I tried to generate several different types of shots. First, I am presenting the cinematographer and then the shots.
Tweet media one
8
4
40
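As a hypothetical example of the --sref workflow described above (the shot description and reference URL are placeholders):

```
/imagine wide establishing shot, rain-soaked city street at night, low angle --sref <cinematographer-reference-image-URL> --ar 21:9
```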
@DavidmComfort
David M. Comfort
2 months
@harryjsisson Biden is hoarse, but Trump basically lies about everything
43
2
38
@DavidmComfort
David M. Comfort
1 year
@chaseleantj Really nice. Using a similar technique, I got it to generate a video montage of images.
6
1
37
@DavidmComfort
David M. Comfort
3 years
@AlexBMorse Great to hear. I’m really looking forward to visiting Provincetown for Carnival Week in two weeks' time. Vaccines work and are highly effective in preventing symptomatic Covid. As a scientist (PhD in Biochemistry, Oxford), I think the CDC jumped the shark
7
1
35
@DavidmComfort
David M. Comfort
6 months
At the @t2remake premiere! The first AI feature-length film
Tweet media one
4
6
37
@DavidmComfort
David M. Comfort
5 months
I wanted to thank @CuriousRefuge for adding my short film, "Across the Cosmos" to their AI Gallery
9
2
31
@DavidmComfort
David M. Comfort
4 months
Here's a test of using a cinematographic style across a lot of establishing / background shots. My apologies for the repetitive shots. The idea was to really push camera motion as much as I could without losing too much coherence, while maintaining consistent lighting. I have a
7
4
32
@DavidmComfort
David M. Comfort
10 months
@drvolts It was pretty good when Biden was elected too
2
1
33
@DavidmComfort
David M. Comfort
1 year
@jbouie The Supreme Court treats the Constitution like a Ouija board
1
5
29
@DavidmComfort
David M. Comfort
8 months
A short educational video on Columbus and the Discovery of America using various AI tools such as @pika_labs
5
7
31
@DavidmComfort
David M. Comfort
4 months
Tonight I’m attending @RunwayML’s AI Film Festival in LA. I'm really excited to see everyone in person.
7
1
30
@DavidmComfort
David M. Comfort
5 months
"The Secret Forest - A Story of Guinevere" - my new short film. I created it using @suno_ai_ , @runwayml and @midjourney
2
4
27
@DavidmComfort
David M. Comfort
9 months
@icreatelife My trailer for "The Lost Prince - Arthur and the Woods of Avalon"
4
1
29
@DavidmComfort
David M. Comfort
3 months
1
3
27
@DavidmComfort
David M. Comfort
6 months
@pika_labs Here's one I just did
5
0
28
@DavidmComfort
David M. Comfort
2 months
@zachdcarter The Supreme Court needs to be reformed (along with the Electoral College, the Senate, etc.), but it’s hard to see how this would happen
3
1
27
@DavidmComfort
David M. Comfort
1 year
1
1
23
@DavidmComfort
David M. Comfort
5 years
1
1
25
@DavidmComfort
David M. Comfort
2 years
I've written a Medium post on "Lighting Techniques in #Midjourney — Volumetric Lighting — Smoke and Fog" #AIart #AIArtwork #aiartcommunity
Tweet media one
3
5
25
@DavidmComfort
David M. Comfort
1 year
@Rufus87078959 Looks good, Rufus! You should try out pika labs image-to-video as well. (And Runway Gen-2)
13
0
23
@DavidmComfort
David M. Comfort
4 months
@icreatelife Establishing shots using a cinematic style
3
1
25
@DavidmComfort
David M. Comfort
1 year
@Rufus87078959 Ignore the haters, Rufus. Best to mute and block them. They just want to ridicule and fight scientific and technological progress.
12
0
22
@DavidmComfort
David M. Comfort
2 months
@StevenGlinert I’m in tech and there's no way I’m on the Trump train. It’s just going to be 4 years of chaos, social / political unrest, and economic uncertainty.
2
0
24
@DavidmComfort
David M. Comfort
5 months
Songs from "The Lost Prince - Arthur and the Woods of Avalon" The inages were created using @midjourney , image-to-video was done using both @runwayml Gen-2 and @pika_labs , the songs were created using @suno_ai_ , the lip sync was done using D-id studios ( @D_ID_ ), and some of
6
5
23
@DavidmComfort
David M. Comfort
2 months
@zachdcarter The Democratic leadership really needs to stand up to Trump now and do everything we can to win, or else we face 4 years of chaos, tears and anger.
8
3
23
@DavidmComfort
David M. Comfort
11 months
@nickfloats Here is a comparison between Topaz 4X vs. MJ's 4X upscaler (Topaz on Left, MJ on Right). Topaz is clearly better.
Tweet media one
5
1
23
@DavidmComfort
David M. Comfort
1 year
@PurzBeats @pika_labs I've been experimenting with @pika_labs as well and have come up with a rough walk cycle. I upscaled the videos and images during each iteration and used the prompt "walking away -motion 1 -camera zoom in"
2
3
23
@DavidmComfort
David M. Comfort
2 years
@ClimateHuman I just can’t understand why anyone would support these actions. These paintings are great cultural patrimony of all humanity and to deface them in misguided protest is abhorrent. It is a totally false choice to claim you have to choose between art and averting climate disaster.
2
0
21
@DavidmComfort
David M. Comfort
16 days
Not quite what I was looking for but a cool shot from Luma Labs
1
2
22
@DavidmComfort
David M. Comfort
3 months
@justin_hart @LumaLabsAI You can do a three-panel story using @LumaLabsAI. Somewhat mixed results. I simply stitched together three images from Midjourney (one has to click in the video to see all three panels).
2
0
21
@DavidmComfort
David M. Comfort
5 months
I'm working on a new project - "Tales of Old"
5
2
20
@DavidmComfort
David M. Comfort
1 year
How to change the clothes on a consistent character in #Midjourney: (1) Create a character, (2) Create the clothes by themselves, (3) Use the image prompts from 1 & 2, and prefix "wearing [the clothing]" to a combined prompt!
Tweet media one
Tweet media two
Tweet media three
4
5
20
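A hypothetical prompt illustrating steps (1)-(3); the URLs stand in for the character and clothing image prompts, and the scene text is purely illustrative:

```
/imagine <character-image-URL> <clothing-image-URL> wearing a hooded crimson travelling cloak, young knight standing in a forest clearing --ar 16:9
```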
@DavidmComfort
David M. Comfort
5 months
"A World So Bold" I created it using @udiomusic , lip sync using @runwayml . images using @midjourney
3
3
20
@DavidmComfort
David M. Comfort
6 months
At the @t2remake in LA
Tweet media one
1
1
20
@DavidmComfort
David M. Comfort
1 month
Here's a quick scene using @LumaLabsAI image-to-video. I used ending keyframe images on some of the clips and camera movements in the prompts (SOUND ON).
2
3
18
@DavidmComfort
David M. Comfort
1 month
@Kling_ai I think Runway Gen-3 and Luma Labs are pretty comparable; Gen-3 appears to throw in a lot of artistic artifacts though. Hopefully this puts pressure on Luma Labs to offer an unlimited plan.
3
0
19
@DavidmComfort
David M. Comfort
4 months
Well, I thought I was able to get a character walking using @runwayml Gen-2, but then the character decided he didn't want anything to do with it.
5
1
18
@DavidmComfort
David M. Comfort
3 months
@karpathy @LumaLabsAI Luma does amazingly well. I put this together today
0
1
19
@DavidmComfort
David M. Comfort
5 months
It looks like you can save your set-up for prompts, motion brush and camera controls as presets in @runwayml now!
Tweet media one
1
4
19
@DavidmComfort
David M. Comfort
3 months
I'm working on a new project. Here is a teaser trailer. I am hoping to have a ten-minute action sequence. This trailer is mainly establishing shots while I work on the whole sequence.
3
3
19
@DavidmComfort
David M. Comfort
1 year
"The Lost Prince - Arthur and the Woods of Avalon" - Trailer 1 Story: David Comfort AI Video: @pika_labs & @runwayml Gen-2 AI Images: @midjourney AI Upscaling: @topazlabs Editing: @capcutapp Music: Winning Elevation - @pixabay It was a lot of work and a lot of fun to make.
6
3
16
@DavidmComfort
David M. Comfort
3 months
"Across the Cosmos" - A journey to the future of human space exploration. I recut by short doc using mainly @LumaLabsAI 's video model. Images: @midjourney Images to Video: @LumaLabsAI , @runwayml Gen-2, @pika_labs Music: @udiomusic and @suno_ai_
5
1
17
@DavidmComfort
David M. Comfort
11 months
Here is my latest revision for the trailer for "The Lost Prince: Arthur and the Woods of Avalon." I used Midjourney mostly (along with DALLE-3 for a few figures) and @Pika_Labs for image-to-video.
5
5
18
@DavidmComfort
David M. Comfort
5 months
@icreatelife "Guinevere's Lament - Do not forget me in the Night" - really an experiment to see how well RunwayMl can do Lip sync for singing. I used @suno_ai_ for the music and voice. I wrote the lyrics.
2
1
17
@DavidmComfort
David M. Comfort
1 year
@chaseleantj Great work Chase! I’ll try this method out. Inpainting works really well for consistent characters too. I wrote up a Medium post on how to do it.
Tweet media one
Tweet media two
Tweet media three
4
3
17
@DavidmComfort
David M. Comfort
21 days
Nice handheld shot using Kling
1
0
18
@DavidmComfort
David M. Comfort
11 months
@icreatelife My latest trailer for "The Lost Prince: Arthur and the Woods of Avalon"
1
1
17
@DavidmComfort
David M. Comfort
21 days
Here's another good shot using Kling AI image-to-video
3
2
17
@DavidmComfort
David M. Comfort
2 months
Here is a comparison for animated character image-to-video creation using @LumaLabsAI, @pika_labs, @runwayml Gen-2, @HaiperGenAI, and @Kling_ai, as well as @runwayml Gen-3 (text-to-video). Here is @LumaLabsAI:
6
4
17
@DavidmComfort
David M. Comfort
2 months
A re-make of a short film, "The Lost Prince - Arthur's Last Stand." The images are from @midjourney. For Image-to-Video I primarily used @LumaLabsAI, but some clips are from @runwayml Gen-2 and @pika_labs. I used Gen-3 for opening and ending titles. The music is created using
4
2
17
@DavidmComfort
David M. Comfort
1 year
@midjourney You can add characters to an image by panning and adding character description to the prompt.
Tweet media one
Tweet media two
0
2
16
@DavidmComfort
David M. Comfort
2 months
I’m testing @LumaLabsAI image-to-video
3
3
17
@DavidmComfort
David M. Comfort
4 months
Here is another quick experiment with establishing shots, trying to get movement out of characters. I am finding that you need to crank up the motion to 10 to get characters to move using Runway's Gen-2 motion brush. Images created using @midjourney. I used a cinematic style
2
4
17
@DavidmComfort
David M. Comfort
2 months
@CodeByPoonam Another experiment with @LumaLabsAI, using a three-panel layout, with three clips stitched together using their new keyframe method (click on the video to see all of the panels). With a little more work, it could make a really cool effect.
1
1
16
@DavidmComfort
David M. Comfort
24 days
Experimenting with Luma Labs' keyframes
2
1
15
@DavidmComfort
David M. Comfort
2 years
@nickfloats I’ve managed to get consistent characters in interior scenes using a combination of seeds and image prompts. I can also get a consistent character along with another character in a setting, but I haven’t written this up yet. It is tricky.
2
0
16
@DavidmComfort
David M. Comfort
1 year
My latest revision of the AI Trailer for "The Lost Prince: Arthur and the Woods of Avalon" using @pika_labs and @midjourney and @runwayml
4
2
16
@DavidmComfort
David M. Comfort
3 months
@LumaLabsAI It works amazingly well
1
0
16
@DavidmComfort
David M. Comfort
2 years
@ClimateHuman @ClimatePsych I hate, I mean hate, these actions. It is really lazy activism that accomplishes nothing and potentially damages priceless art. I’ve been doing activism for 20+ years & having an effective strategy is hard. These activists need to think long and hard and come up w/ better actions.
2
0
16
@DavidmComfort
David M. Comfort
1 month
Here is my latest short featuring scenes from "The Lost Prince" which take place in "The Forest Primeval". "Fáinne geal an lae" means "The bright dawn of day." Image-to-video clips were created in Luma Labs Dream Machine @LumaLabsAI. I would try to hone some of the video
4
2
15
@DavidmComfort
David M. Comfort
1 year
@Noahpinion This is pretty astounding: “The eurozone economy grew about 6% over the past 15 years, measured in dollars, compared with 82% for the U.S., according to International Monetary Fund data.”
3
0
15
@DavidmComfort
David M. Comfort
6 months
Here is a quick demo of sound effects using @pika_labs . Sometimes it just doesn't work but it is in beta.
0
1
15
@DavidmComfort
David M. Comfort
1 year
@TurkMatthew @LangChainAI Actually this evening I got a pandas data agent working with the chat agent executor, so I can do Q&A with a CSV data file. No need to do embeddings or use a vector store. An added bonus is that you can view the thought process of the agent. Now to add memory & more tools.
1
0
16
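A minimal sketch of that kind of setup, assuming LangChain's experimental pandas DataFrame agent and an OpenAI chat model; the CSV path, model name, and question are placeholders, and this is not necessarily the exact configuration used in the tweet.

```python
# Illustrative only: Q&A over a CSV via LangChain's pandas DataFrame agent.
import pandas as pd
from langchain_openai import ChatOpenAI
from langchain_experimental.agents import create_pandas_dataframe_agent

df = pd.read_csv("data.csv")  # hypothetical file

agent = create_pandas_dataframe_agent(
    ChatOpenAI(model="gpt-4o-mini", temperature=0),
    df,
    verbose=True,                # prints the agent's intermediate reasoning
    allow_dangerous_code=True,   # the agent runs the pandas code it generates
)

print(agent.invoke({"input": "Which column has the most missing values?"}))
```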
@DavidmComfort
David M. Comfort
6 months
Here is a quick demo using Consistent Characters in @midjourney, Lip Sync in @pika_labs, Sound Effects using @elevenlabsio, and Video using @runwayml Gen-2. Lots of tools but pretty quick to put it all together. I think the weak link is Lip Sync.
3
1
15
@DavidmComfort
David M. Comfort
22 days
My first experiment using Hedra's new Lip Sync model. Definitely a step up!
1
0
15
@DavidmComfort
David M. Comfort
1 month
@Kling_ai @pika_labs Here’s a trailer I did using Luma Labs
3
1
15
@DavidmComfort
David M. Comfort
1 year
@icreatelife Vary (Strong) is really good. Just use an image prompt, add a background prompt and an image weight of 0.25, and change the aspect ratio. Here’s a Medium article about it
Tweet media one
Tweet media two
Tweet media three
3
1
15
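For illustration, a hypothetical prompt of that shape before applying Vary (Strong); the URL and background text are placeholders:

```
/imagine <character-image-URL> standing on a windswept clifftop at dawn --iw 0.25 --ar 16:9
```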
@DavidmComfort
David M. Comfort
10 years
Solar Impulse Pilots Get Ready for Sun-Powered Flight Around World http://t.co/YIKnoxupMk @solarimpulse http://t.co/Qnu3HPifgK
Tweet media one
0
11
15
@DavidmComfort
David M. Comfort
2 months
@alexiskold @bhorowitz @pmarca Whether to support Trump or not is a basic test of character and judgement, and a lot of people are failing that test (by supporting him). Disappointing, and it leads one to question their judgement about everything.
4
0
14
@DavidmComfort
David M. Comfort
8 years
Could Alzheimer’s Stem From Infections? It Makes Sense, Experts Say
0
0
11
@DavidmComfort
David M. Comfort
3 months
@justin_hart @LumaLabsAI Interesting. I’ll have to try it; Luma does a good job with old photos too
3
1
14
@DavidmComfort
David M. Comfort
6 months
Here is a first test of Character Consistency in @midjourney using --cref. The first image is the reference, and I used the default value for --cw.
4
1
14
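For reference, a hypothetical prompt of the kind being tested, with --cref pointing at the reference image and --cw left at its default (the URL and scene text are placeholders):

```
/imagine young knight walking through a busy market square --cref <character-reference-image-URL>
```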
@DavidmComfort
David M. Comfort
2 months
Gazing at the sky and stars above us...
4
1
14