ptrblck

@ptrblck_de

18,614
Followers
434
Following
31
Media
1,556
Statuses

Deep learning and drums, @PyTorch engineer at @NVIDIA

California, USA
Joined April 2014
Pinned Tweet
@ptrblck_de
ptrblck
5 years
I just posted my 10,000th reply in the @PyTorch discuss forum! Thanks everyone for creating such a great community, for the guidance and mentorship I received, and @soumithchintala for starting this journey.
91
51
2K
@ptrblck_de
ptrblck
2 years
Lots of love in this thread! You all are amazing and made my day! :)
@kaikim29
Kai
2 years
Shoutout to the legend who answers literally every single PyTorch question out there!
Tweet media one
63
197
4K
42
28
3K
@ptrblck_de
ptrblck
1 year
Thanks a ton for awarding me the @PyTorch superhero award at the conference! It really made my day, and it was great seeing so many community members in real life. You all are amazing!
Tweet media one
122
81
2K
@ptrblck_de
ptrblck
4 years
Wow, this arrived in the mail today from @NVIDIAAI . I’m touched beyond words for the kudos of Jensen and @soumithchintala . I am truly grateful to be able to work in such a unique team with unparalleled colleagues and the @PyTorch community. Thanks to all who made it possible.
Tweet media one
34
33
851
@ptrblck_de
ptrblck
5 years
This line of @PyTorch code fascinates me every time I come across it: `y = x_backward + (x_forward - x_backward).detach()` As @ThomasViehmann explained: "It gets you x_forward in the forward pass, but the derivative will act as if you had x_backward."
10
69
459
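A minimal sketch of the trick, using rounding as the non-differentiable forward op (any surrogate pair works the same way):

```python
import torch

# Straight-through pattern: the forward pass yields x_forward,
# but gradients flow as if the output were x_backward.
x = torch.tensor(2.3, requires_grad=True)
x_forward = torch.round(x)   # non-differentiable (zero gradient almost everywhere)
x_backward = x               # differentiable surrogate

y = x_backward + (x_forward - x_backward).detach()
print(y.item())              # 2.0 -- the rounded forward value
y.backward()
print(x.grad.item())         # 1.0 -- the gradient of x_backward
```

This is the classic straight-through estimator pattern used, for example, in quantization-aware training.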
@ptrblck_de
ptrblck
2 years
Thank you for the kind words and for being part of this enjoyable community, @ThomasViehmann ! A huge thank you to all of you who make the community and discussion forum so great! 🥳
@ThomasViehmann
Thomas Viehmann
2 years
Posting when you are stuck with @PyTorch code not quite doing what you want will more often than not see @ptrblck_de helping you out with patience, kindness, and unparalleled expertise. Yesterday, he reached 30 thousand replies! 🥳 Thank you so much! ❤️
Tweet media one
12
19
390
19
15
420
@ptrblck_de
ptrblck
5 years
Our @PyTorch team at @nvidia is recruiting! If you love PyTorch and are interested in working in deep learning compilers, automatic code generation or the PyTorch core, please apply: Also, send me a DM, if you want to talk about the positions! :)
3
104
372
@ptrblck_de
ptrblck
5 years
Native Automatic Mixed Precision Training is available in the latest @PyTorch nightly binaries and master! No need to build apex anymore. Check out the examples:
6
71
367
@ptrblck_de
ptrblck
2 years
@kaikim29 Hahaha, thanks! 😊
11
0
351
@ptrblck_de
ptrblck
3 years
Happy Birthday, PyTorch! 🥳 🎉 How time flies!
@PyTorch
PyTorch
3 years
Today marks 5 years since the public release of PyTorch! We didn't expect to come this far, but here we are 🙂 - 2K Contributors, 90K Projects, 3.9M lines of "import torch" on GitHub. More importantly, we're still receiving lots of love and having a great ride. Here's to the future!
Tweet media one
61
457
3K
5
11
336
@ptrblck_de
ptrblck
6 years
Next month I'll be joining @nvidia 's @PyTorch frameworks team. Looking forward to an amazing time with a great team! :)
20
12
340
@ptrblck_de
ptrblck
2 years
If you are using @StableDiffusion and want to make it faster, check Simo's post and notebook showing how nvFuser will speed it up:
4
41
286
@ptrblck_de
ptrblck
3 years
Five years ago today, @ThomasViehmann posted for the first time in the @PyTorch forum. 🥳 I am very grateful for his continuous activity and I am sure it has had a huge impact on many users (including me). A huge thank you, and to the next 5 years! 🎉
7
6
257
@ptrblck_de
ptrblck
2 years
Haha, it seems ChatGPT is already on it. Look at this kind message I've received today ☺️:
Tweet media one
@jxmnop
jack morris
2 years
one day the history books will write about the role this man played in the development of AGI
Tweet media one
13
64
834
8
9
254
@ptrblck_de
ptrblck
3 years
Yay! 1.10 is out! 🎉 Check out the CUDA Graphs API and the new CUDA 11.3 binaries!
@PyTorch
PyTorch
3 years
PyTorch 1.10 is here! Highlights include:
- CUDA Graphs APIs updates
- Several frontend APIs moved to Stable
- Automatic fusion in JIT Compiler support for CPU/GPUs
- Android NNAPI now in beta
Blog: Release:
11
162
644
2
17
224
@ptrblck_de
ptrblck
2 years
Congrats on the release! I see one missed opportunity 😅
Tweet media one
@_willfalcon
William Falcon ⚡️
2 years
⚡ PyTorch @LightningAI 2.0 is out! 🤯 Fireside chat at (12 ET):
- Why AI has to stay OpenSource ⚡⚡
- History of PyTorch @LightningAI
- New features in 2.0 to help you with foundation models
- How we introduced our final, stable API 👉
Tweet media one
3
16
108
4
10
195
@ptrblck_de
ptrblck
4 years
I'm honored to have the opportunity to be the keynote speaker at the Ecosystem Day and am looking forward to meeting as many community members as possible throughout the day! :)
@PyTorch
PyTorch
4 years
We are excited to announce that @ptrblck_de will be the opening keynote for the morning session of PyTorch Ecosystem Day! Register now for #PTED21 here:
Tweet media one
8
21
310
8
14
183
@ptrblck_de
ptrblck
2 years
@soumithchintala @kaikim29 Hahaha, the new design just dropped:
Tweet media one
5
7
178
@ptrblck_de
ptrblck
6 years
@PyTorch implementation of the StyleGAN Generator (Karras et al. @NvidiaAI ) with pretrained weights by @ThomasViehmann and me. It was a pleasure to work with him on this project. Feedback always welcome!
7
46
155
@ptrblck_de
ptrblck
4 years
Amazing work being done in the @PyTorch Team @NVIDIAAI on the new code generation stack enabling automated fusion for dynamic shapes. Check out session S31952 at #GTC21 ! Christian Sarofeen will walk you through the design, benefits, and future directions!
2
14
146
@ptrblck_de
ptrblck
5 years
Check out our new dark theme in the @PyTorch forum! No more tired eyes while browsing through the board :P
Tweet media one
5
3
148
@ptrblck_de
ptrblck
2 years
nvFuser blogpost released! 🥳 Check the tutorial to see how runtime generated CUDA kernels can speed up your workload:
@PyTorch
PyTorch
2 years
Check out this blog post for the latest on nvFuser, our new default Deep Learning Compiler for NVIDIA GPUs. nvFuser has unique capabilities built just for PyTorch & can achieve great speedups on NLP & Vision Networks with its runtime generated kernels.
Tweet media one
1
33
164
1
23
135
@ptrblck_de
ptrblck
4 years
Check out blendtorch by @cheind : integration of Blender renderings into @PyTorch datasets! "We utilize Eevee, a new physically based real-time renderer, to synthesize images and annotations at 60FPS and thus avoid stalling model training in many cases."
2
23
122
@ptrblck_de
ptrblck
2 years
Hahaha, let's see when ML models will ask you for a minimal and executable code snippet to reproduce the issue 😅
@LandupDavid
David
2 years
Everyone talks about the role of X and Y in building #AGI , but nobody talks about the true backbone of it all - @ptrblck_de
1
1
27
1
2
119
@ptrblck_de
ptrblck
4 years
Thank you, @ThomasViehmann ! I'm humbled by your words and hope my posts can live up to these kudos. Thank you all for being part of this community and for making it so enjoyable. To cite Tom: "I came for PyTorch, but I stayed for the company." :)
@ThomasViehmann
Thomas Viehmann
4 years
The wise and inspiring @ptrblck_de posted his 20,000th post today, helping literally thousands of people to get ahead with their @PyTorch projects. Thank you! Also, this means 10,000 posts in 2020 alone:
Tweet media one
7
23
237
5
4
96
@ptrblck_de
ptrblck
2 years
@ajayj_
def nan_hook(name):
    def hook(m, input, output):
        if not torch.isfinite(output).all():
            print("Invalid output in {}".format(name))
    return hook

for name, module in model.named_modules():
    module.register_forward_hook(nan_hook(name))
1
7
87
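The snippet above, expanded into a self-contained sketch (the toy model is hypothetical, just to show the hook firing when NaNs propagate):

```python
import torch
import torch.nn as nn

def nan_hook(name):
    def hook(module, inputs, output):
        if not torch.isfinite(output).all():
            print("Invalid output in {}".format(name))
    return hook

# Hypothetical toy model to exercise the hook.
model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
for name, module in model.named_modules():
    module.register_forward_hook(nan_hook(name))

# NaN inputs propagate through, so every layer reports an invalid output.
out = model(torch.full((1, 4), float("nan")))
```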
@ptrblck_de
ptrblck
2 years
@StasBekman You are most likely using CUDA 11.7+, which ships with lazy kernel/module loading and is enabled by default in PyTorch 1.13.1+. (Run `export CUDA_MODULE_LOADING=EAGER` to disable it as a test)
2
1
84
@ptrblck_de
ptrblck
4 years
Native AMP now available in the @PyTorch v1.6 release!
@PyTorch
PyTorch
4 years
v1.6: native mixed-precision support from NVIDIA (~2x perf improvement), distributed perf improvements, new profiling tool for memory consumption, Microsoft commits to developing and maintaining Windows PyTorch. Release Notes: Blog:
5
238
789
1
6
64
@ptrblck_de
ptrblck
26 days
@rasbt @lantiga @PyTorch It was great meeting you finally! Your books and lectures were my reference while digging into ML and now I even got a signed copy of your new book! Time to build an LLM from scratch!
Tweet media one
1
3
66
@ptrblck_de
ptrblck
2 years
@rasbt @shravankumar147 @PyTorch @pytorchlightnin Nit: don't call `.forward` directly, as it will skip all registered hooks:
3
4
56
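A quick sketch of why this matters: hooks are dispatched in `__call__`, so `model(x)` triggers them while `model.forward(x)` silently does not.

```python
import torch
import torch.nn as nn

calls = []
model = nn.Linear(2, 2)
model.register_forward_hook(lambda mod, inp, out: calls.append("hook"))

x = torch.randn(1, 2)
model(x)          # goes through __call__, the hook fires
model.forward(x)  # bypasses __call__, the hook is skipped
print(calls)      # ['hook'] -- only one firing for two forward passes
```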
@ptrblck_de
ptrblck
2 years
@karpathy @kaikim29 Thanks for the kind words! :)
1
0
51
@ptrblck_de
ptrblck
3 years
Are you interested in a fully automated GPU code generation system designed and implemented in @PyTorch ? Join this GTC session to learn more about the latest updates from our nvfuser team presented by Christian Sarofeen!
0
7
53
@ptrblck_de
ptrblck
2 years
Exciting update for “tracing with primitives” in @PyTorch ! If you are interested in contributing and helping us implement PyTorch’s logic in a more readable, hackable, and composable way, please reach out!
@mikeruberry
Mike Ruberry
2 years
We recently posted the third update in PyTorch's "tracing with primitives" series. See . Bringing PyTorch's logic into Python is exciting, and we invite the community to participate!
0
21
175
1
3
50
@ptrblck_de
ptrblck
4 years
🤯 I always used `gridspec`, but this looks much easier
@matplotlib
Matplotlib
4 years
We have this awesome function called subplot_mosaic where you can pass us a layout id'ed on name:
axd = plt.subplot_mosaic(
    """
    ABD
    CCD
    """
)
Tweet media one
25
383
2K
1
3
49
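A self-contained sketch of the API (using the non-interactive Agg backend so no window is needed):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

# Each letter names a panel; repeated letters make a panel span cells.
fig, axd = plt.subplot_mosaic(
    """
    ABD
    CCD
    """
)
print(sorted(axd))  # ['A', 'B', 'C', 'D']
axd["C"].set_title("wide bottom-left panel")
```

The returned dict maps each letter to its Axes, which is much easier to read than indexing a `gridspec` by row/column slices.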
@ptrblck_de
ptrblck
4 years
Join our #NVIDIAInception #GTC20 session including an introduction to @PyTorch by Suraj and @jrhunt , GPU performance tips by @michaelcarilli , and community highlights from me, @PyTorchLightnin by @_willfalcon , and @kornia_foss by @edgarriba :
0
8
45
@ptrblck_de
ptrblck
7 years
@khanhxuannguyen @karpathy @hugo_larochelle And here are all slides of the DLSS2017 in case you were looking for these:
1
21
46
@ptrblck_de
ptrblck
3 years
@aamaljoseph @PyTorch Haha, happy to help. For the shared screenshot: using `strict=False` will ignore the key errors and will thus skip those parameters, so make sure this is really what you want.
3
0
46
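A sketch of what that looks like with a hypothetical two-layer model; the report returned by `load_state_dict` lists exactly what was skipped:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))

# The state_dict only covers the first layer.
state = {"0.weight": torch.zeros(2, 2), "0.bias": torch.zeros(2)}

# strict=False suppresses the key-mismatch error and skips the rest;
# inspect the returned report so the skipping doesn't go unnoticed.
report = model.load_state_dict(state, strict=False)
print(report.missing_keys)  # ['1.weight', '1.bias']
```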
@ptrblck_de
ptrblck
2 years
@wightmanr @soumithchintala @rom1504 For convs enable the cuDNN v8 API via `TORCH_CUDNN_V8_API_ENABLED=1` and `bfloat16` should be used. It's experimental at the moment, but should be enabled soon by default.
4
8
43
@ptrblck_de
ptrblck
4 years
@heghbalz Trying to finetune ResNet for the first time. `lr=1.` should be fine I guess?
1
4
40
@ptrblck_de
ptrblck
2 years
@ajayj_ If you are using modules, you could register a forward hook and add debug print statements for a quick check which layer might be causing it. This code is quite naive, but might give you enough information to start digging into the model.
1
2
37
@ptrblck_de
ptrblck
2 years
@deliprao The PyTorch binaries ship with their own CUDA dependencies, so you would also need to install a proper NVIDIA driver (not the full CUDA toolkit). `torch.compile` uses `ptxas` for its code-gen, and this binary should also ship in the current nightlies now.
3
0
37
@ptrblck_de
ptrblck
5 years
Grab it fast! :)
@PyTorch
PyTorch
5 years
To help developers get started with PyTorch, we’re making the 'Deep Learning with PyTorch' book, written by Luca Antiga and Eli Stevens, available for free to the community:
Tweet media one
36
913
3K
3
7
36
@ptrblck_de
ptrblck
5 years
The Global @PyTorch Summer Hackathon is starting soon! (Aug 9 – Sep 16, 2019) Register here:
1
13
35
@ptrblck_de
ptrblck
2 years
@ID_AA_Carmack Scripting the model and allowing nvFuser to code generate / fuse kernels gives a speedup:
@ptrblck_de
ptrblck
2 years
If you are using @StableDiffusion and want to make it faster, check Simo's post and notebook showing how nvFuser will speed it up:
4
41
286
2
2
33
@ptrblck_de
ptrblck
2 years
A lot of interesting talks you shouldn't miss!
@PyTorch
PyTorch
2 years
Join us at @nvidia GTC next week to hear from PyTorch researchers and contributors on the latest for 2.0, performance and other talks! 🔎 Read more about the PyTorch talks: 🖥️ Register for free:
Tweet media one
1
9
61
0
3
32
@ptrblck_de
ptrblck
2 years
What a great blog post describing how TorchDynamo and nvFuser can speed up models easily. It even provides a notebook to reproduce the results. "we have not seen any drawback implied by the use of this library, the acceleration just comes for free" 🎉🥳
@pommedeterre33
Michaël Benesty
2 years
We have recently tested the excellent TorchDynamo prototype from @PyTorch team and benchmarked it vs @onnxruntime and TensorRT. TL;DR: big boost in inference perf + ease of use without major drawback. 👏 @jansel0 & team!
3
10
79
1
3
31
@ptrblck_de
ptrblck
3 years
@ThomasViehmann @PyTorch To celebrate the day: Tom shares a blog post to visualize interactive graphs in PyTorch:
0
1
28
@ptrblck_de
ptrblck
4 years
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: 1D target tensor expected, multi-target not supported
@olafurw
Ólafur Waage
4 years
You are a demon. How do I summon you?
253
17
214
1
1
30
@ptrblck_de
ptrblck
2 years
"DREAMPlace computes both wire length and density gradients numerically using GPU-accelerated algorithms enabled by the PyTorch framework." 💪
@NVIDIAAIDev
NVIDIA AI Developer
2 years
How can #AI optimize chip design? Learn how AutoDMP optimizes macro placement for chip design with AI and GPUs. Read more: ➡️
2
7
43
0
2
30
@ptrblck_de
ptrblck
4 years
Wow, this looks like a great way to visualize @PyTorch models! Thanks @ThomasViehmann for this awesome notebook.
@ThomasViehmann
Thomas Viehmann
4 years
The code for my (very ad hoc but very flexible) visualization of @PyTorch models is available. With a big thank you to my github sponsor.
0
21
107
0
0
28
@ptrblck_de
ptrblck
4 years
Pretty neat explanation of Particle Filters. I didn't know you could use a hybrid approach in combination with a Kalman Filter.
@AndrewM_Webb
Andrew M. Webb
5 years
Particle filters are general algorithms for inferring the state of a system with noisy dynamics and noisy measurements. Here's an example with a robot in a circular room. Red=true robot, blue=guesses, occasional red line=noisy range sensor measurement. Details in thread 1/
7
106
543
0
4
27
@ptrblck_de
ptrblck
3 years
@ArmanAve @PyTorch Set `reduction='none'` in the loss function, which will return the unreduced loss tensor.
1
0
27
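For instance (assuming a classification loss here; the same `reduction` argument exists on most loss modules):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss(reduction="none")
logits = torch.randn(4, 3)        # batch of 4 samples, 3 classes
target = torch.tensor([0, 1, 2, 0])

loss = criterion(logits, target)  # one loss value per sample, not a scalar
print(loss.shape)                 # torch.Size([4])
```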
@ptrblck_de
ptrblck
5 years
This looks like an awesome tutorial on writing a @PyTorch backend compiler! Looking forward to trying it out. :)
@bwasti
Bram Wasti
5 years
I wrote a tutorial on writing a simple compiler and integrating it into PyTorch
0
6
23
1
10
26
@ptrblck_de
ptrblck
4 years
@porestar @PyTorch Hahaha :D I'm still working on the second point...
Tweet media one
0
0
24
@ptrblck_de
ptrblck
4 years
Yay! Apply now! :)
@PyTorch
PyTorch
4 years
We’re excited to announce the first-ever PyTorch Ecosystem Day #PTED21 , a virtual event designed for our ecosystem and industry communities to showcase their work and discover new opportunities to collaborate and network! Apply now👇
Tweet media one
5
40
173
2
3
25
@ptrblck_de
ptrblck
4 years
What a great interview! "I came for PyTorch, but I stayed for the company." I can only confirm @ThomasViehmann 's statement, who became a very good friend through PyTorch.
@bhutanisanyam1
Sanyam Bhutani
4 years
Giveaway + Release: Here's my interview w 3 great contributors to @PyTorch : All about their book: Deep Learning w PyTorch by @ManningBooks , Open Source and PyTorch. Eli Stevens, @lantiga and @ThomasViehmann Audio: Video:
5
48
124
0
2
24
@ptrblck_de
ptrblck
2 years
@tomcocobrico
Jeffrey 杰弗瑞
5 years
@_mishy @ptrblck_de Maybe he already solved AGI and coded up a bot to answer all those questions while netflixing the whole day
0
0
12
3
0
23
@ptrblck_de
ptrblck
2 years
@rasbt @PyTorch Don't overfit...overcook it!
0
0
23
@ptrblck_de
ptrblck
2 years
@jeremyphoward Thanks for sharing as it's good to see the perf. improvements using the v8 API, CUDA Graphs, and channels-last!
1
1
22
@ptrblck_de
ptrblck
3 years
@AnonPhDStudent @PyTorch Thanks for the kind words! :)
0
0
22
@ptrblck_de
ptrblck
4 years
@francoisfleuret @PyTorch What would be your use case? Do you want to check your script for functional issues, such as indexing errors, or would you like to emulate a specific GPU architecture and check device kernels for potential issues?
5
0
21
@ptrblck_de
ptrblck
2 years
@Sathishtheta I don't create specific time slots, but there is often some time to answer a few questions while waiting for a source build to finish, between meetings, or even when waiting for a docker pull. I just enjoy it and can still learn new stuff from users. :)
0
1
21
@ptrblck_de
ptrblck
5 years
@NVIDIAAI just released the MONAI framework in @PyTorch ! 🎉 Coming from a Biomedical Engineering background, it's great to see these toolkits. Check out the examples; contributions are always welcome! :) PS: It also comes with @pytorch_ignite examples ;)
0
8
20
@ptrblck_de
ptrblck
2 years
If you are interested in hearing more about general nvFuser features, its progress, and the new Python interface, check the [A41255] GTC session, Tuesday, Sep 20, 12:00 PM - 12:50 PM PDT:
@ptrblck_de
ptrblck
2 years
nvFuser blogpost released! 🥳 Check the tutorial to see how runtime generated CUDA kernels can speed up your workload:
1
23
135
0
1
20
@ptrblck_de
ptrblck
5 years
Technical blog post:
@ptrblck_de
ptrblck
5 years
@NVIDIAAI just released the MONAI framework in @PyTorch ! 🎉 Coming from a Biomedical Engineering background, it's great to see these toolkits. Check out the examples; contributions are always welcome! :) PS: It also comes with @pytorch_ignite examples ;)
0
8
20
0
3
20
@ptrblck_de
ptrblck
2 years
@karpathy @Suhail Fully agree and I think "primTorch" might be creating the right path to reduce the complexity again:
@ptrblck_de
ptrblck
2 years
Exciting update for “tracing with primitives” in @PyTorch ! If you are interested in contributing and helping us implement PyTorch’s logic in a more readable, hackable, and composable way, please reach out!
1
3
50
1
1
17
@ptrblck_de
ptrblck
2 years
@jsotterbach @PyTorch @NVIDIAAI I'm a bit confused how data parallelism and ensembles fit together. The former is sending a chunk of the dataset to all model clones (using multiple GPUs) and syncs their updates while the latter trains different models potentially using the same data. How are you combining them?
2
1
19
@ptrblck_de
ptrblck
4 years
@ThomasViehmann @PyTorch Congrats! Your posts are always insightful and we are all lucky to have you in the forums. I also learn a ton from reading your posts as seen here :P
Tweet media one
0
0
18
@ptrblck_de
ptrblck
4 years
@PyTorch @NVIDIAAI The entire @PyTorch Team @NVIDIAAI is also more than happy to answer all your questions :)
1
0
17
@ptrblck_de
ptrblck
5 years
Looking forward to seeing you all again! :)
@PyTorch
PyTorch
5 years
We're excited to host the second annual PyTorch Developer Conference, featuring talks, discussions and posters from the core-devs, ecosystem, and industry. Date: Oct 10th, 2019 in San Francisco. Space is limited, apply for an invite at
Tweet media one
5
47
174
0
3
17
@ptrblck_de
ptrblck
3 years
Ha! See you all on Wednesday! :)
@ThomasViehmann
Thomas Viehmann
3 years
Something with @PyTorch , TorchDrift, our book Deep Learning with PyTorch and yours truly. Wednesday May 19 at 1pm Pacific time (10pm in Bergamo). #PTCV
Tweet media one
5
12
69
0
1
17
@ptrblck_de
ptrblck
4 years
I'll quickly write this regex with a positive lookahead over multiple lines... yeah, let's skip the "quickly" part.
1
0
16
@ptrblck_de
ptrblck
4 years
@divyayyy Good to hear the posts are helpful :)
0
0
16
@ptrblck_de
ptrblck
3 years
Very interesting thread! In case nostalgia kicks in:
@soumithchintala
Soumith Chintala
3 years
It’s been 5 years since we launched @pytorch . It’s much bigger than we expected -- usage, contributors, funding. We’re blessed with success, but not perfect. A thread (mirrored at ) about some of the interesting decisions and pivots we’ve had to make 👇
26
278
2K
0
1
15
@ptrblck_de
ptrblck
7 months
Dispatch to various executors including `torch.compile` and `nvFuser`:
Tweet media one
1
0
14
@ptrblck_de
ptrblck
2 years
@DrayPAD Could you create a post in the discussion board (if not already done) so that I could take a look at it, please?
1
0
13
@ptrblck_de
ptrblck
4 years
@KevinKaichuang @PyTorch It should still be `num_batches_tracked` and your screenshots show the same name. Am I missing something? 🤔
1
0
13
@ptrblck_de
ptrblck
4 years
@hardmaru @turtlesoupy I'm a verb! Seems like I can ptrblck someone now!
Tweet media one
0
0
14
@ptrblck_de
ptrblck
2 years
@radekosmulski > I guess maybe momentum in Adam is affected, but for anything else sparse should not make any difference? Yes, this would be the difference and would thus lead to different results. Check this gist:
1
3
14
@ptrblck_de
ptrblck
5 years
Great slides! Seems like a nice intro to get started working on PyTorch. Looking forward to the longform version! :)
@ezyang
Edward Z. Yang
5 years
Unannotated slides for my PyTorch Internals talk at the PyTorch NYC meetup yesterday are at (I'm also planning to write a longform version with text.)
6
45
192
0
0
14
@ptrblck_de
ptrblck
6 years
Just created a repo with a few scripts I've written as code samples on the @PyTorch discussion board and which I find quite handy. Maybe you'll find something useful there! ;)
0
2
13
@ptrblck_de
ptrblck
4 years
@edgarriba @OpenAI @PyTorch I guess new swag for the next DevCon is already in production...
Tweet media one
2
0
13
@ptrblck_de
ptrblck
6 years
Thanks @PyTorch for the awesome Developer Conference! A lot of interesting talks in an awesome location. It was great to finally meet so many folks from the community in real life. Huge shout-out to the organizers!
1
1
13
@ptrblck_de
ptrblck
4 years
@terrible_coder Random seed! :P
0
0
13
@ptrblck_de
ptrblck
4 years
@egrefen @PyTorch Better enter with detect_anomaly(True) to grab all Naans.
0
0
13
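Behind the joke is a real workflow: anomaly mode raises as soon as a backward function produces non-finite values, pointing at the offending op. A sketch with an intentionally bad sqrt:

```python
import torch

x = torch.tensor(-1.0, requires_grad=True)
try:
    with torch.autograd.detect_anomaly():
        y = torch.sqrt(x)   # NaN in the forward pass
        y.backward()        # anomaly mode flags the NaN gradient
except RuntimeError as e:
    print("caught:", type(e).__name__)
```

Anomaly mode slows training down considerably, so it is meant as a debugging tool, not something to leave enabled.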
@ptrblck_de
ptrblck
4 years
@r9y9 Citing @michaelcarilli : `torch.cuda.amp` is the truth! ;)
1
0
12
@ptrblck_de
ptrblck
5 years
@Ritika_Borkar @jeremyphoward @BruceHolmer @PyTorch This exactly! I add a warm-up loop (e.g. for 10 iters) using dummy tensors for the forward and backward pass to let cudnn find the fastest kernels. Also, this code snippet might be useful for profiling multiple processes:
2
1
12
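A sketch of that warm-up pattern (the layer and shapes are made up, and it falls back to CPU when no GPU is present):

```python
import torch
import torch.nn as nn

torch.backends.cudnn.benchmark = True  # let cudnn profile kernels per input shape
device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Conv2d(3, 16, 3).to(device)
x = torch.randn(8, 3, 32, 32, device=device)

# Warm-up iterations with dummy data before any timing starts,
# so the benchmark-mode kernel search doesn't pollute measurements.
for _ in range(10):
    out = model(x)
    out.mean().backward()

if device == "cuda":
    torch.cuda.synchronize()  # drain queued kernels before starting a timer
```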
@ptrblck_de
ptrblck
2 years
@rasbt `pytorch_model.to(memory_format=torch.channels_last)` when `amp` is enabled, plus `torch.backends.cudnn.benchmark = True`, might give you an additional speedup. Note that cudnn.benchmark profiles kernels for each new input shape, so be careful if dynamic shapes are used.
1
0
12
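A minimal sketch of the conversion (CPU-only here; the actual speedup needs amp on a tensor-core GPU):

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 8, 3).to(memory_format=torch.channels_last)
x = torch.randn(2, 3, 16, 16).to(memory_format=torch.channels_last)

# Shapes stay NCHW; only the underlying memory layout becomes NHWC.
print(x.shape)                                             # torch.Size([2, 3, 16, 16])
print(x.is_contiguous(memory_format=torch.channels_last))  # True

out = model(x)  # convolutions propagate the channels-last layout
```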
@ptrblck_de
ptrblck
4 years
@omarsar0 I guess `module.register_forward_hook()` as it usually involves some kind of debugging. :)
1
0
12