@SheriefFYI
Go back to windows xp and era appropriate software and tell me why every single thing got slower except for raw compute power. Why does modern Word need a splash screen while it loads? It's a goddamn paper simulator. How is it worse than ever?
people here talking about how C/C++ is unsafe, no good, don't learn it
Your browser was written in C. Your OS was written in C. AAA games are in C. game engines are in C. "safe" languages are (or were) in C. Your art tools, written in C. Audio tools, C.
You know that thing where the real user interface takes 5 seconds to load, so in the meantime you get a pulsating gray mockup? That didn't exist in the 80s, 90s, or 00s.
at a job interview almost 20 years ago, I was asked to list all the ways to optimize a given piece of mathy code, and I deliberately avoided "early out" and was dinged, and had to explain to this team that early out is slow in modern times
Why do C programmers always obfuscate their code? Are they trying to save space? Do they have to pay for each letter? Are they using some trial version of GCC that doesn't allow actual words in variable names?
@Lambda_Coder
I am convinced that 95% of it is attributable to JavaScript callbacks, which call into callbacks, etc. each with no regard for performance
I would like more software to look like this. At least I would know for sure that the computing power isn't *entirely* devoted to rendering a spinning blue circle animation
the console game industry collectively adopted C++ in the PS2 era, tried to do everything textbook OOP style, found out the hard way it was mostly impractical, and then abandoned it, in a sort of speedrun
First year university, I had an OOP professor.
He talked a lot about what's OOP and how it's different from procedural.
At the time I was learning C++ by studying the Half-Life SDK. So I showed him this code and asked him about it.
He said it's not OOP.
whenever anyone makes a statement like "C doesn't need generics" i kinda have to ask what's the biggest concern for you in not wanting them. because the three biggest pain points i hear are compile times, error messages, and, to a lesser extent, shared lib exports
all of these things are in C because beyond some threshold, nothing else will deliver *predictably* higher performance at both compilation time and runtime, except maybe a shader, which is useful in fewer contexts
When you join a AAA studio they take you to a smoke filled back-room and show you the sacred email threads - passed down from generation to generation of employees - with the wildest bugs you’ve ever seen in your life. This becomes the hardest part of the NDA to fulfill.
This kind of UI should be illegal. I always end up typing too fast and the textbox focus switch can't keep up.
Use a single textbox. If you want to make the required number of digits obvious, put some underlines in the textbox.
The internet revolution of the 90s has now concluded. Search engines are now just the yellow pages, and operating systems are now just surveillance cameras. All of the cool stuff is now behind a VPN.
This isn’t an April Fools joke by the way. It sucks because I distinctly remember one of the reasons people quit Skype was due to the intrusive ads and now we are back full circle.
so, hear me out: a game console with a modern APU, but only 128 bytes of RAM and no frame buffer, like an Atari 2600
You're "chasing the beam" again, but you can raytrace into mandelbulbs this time around?
What Are the Greenest Programming Languages?
The study below runs 10 benchmark problems in 28 languages [1]. It measures the runtime, memory usage, and energy consumption of each language. The abstract of the paper is shown below.
“This
young programmer:
write function to process single items first, write batch processing in terms of single items
old programmer:
write function to process batch first, write single-item processing in terms of batches
You wouldn't drive all the way out to the supermarket to buy one slice of bread.. so why would you drive all the way out to a random memory address, to read one datum?
even among performance minded folks, there's a persistent myth that malloc is "pretty fast" when in fact it's so performance hostile that big game productions ban it at link time, to eliminate the damage it causes
finally had "the talk" with my boys today, and now they're better prepared for life
we single-stepped through Super Mario Land in NO$GMB debugger, talked about the stack, program counter, and hex. by the end they were deriving addressable memory of Z80 from 1st principles
This might be heresy but:
1. Code reviews are a massive productivity tax with tiny quality benefits
2. They should not be mandated
3. The author should feel free to request a review if they want it
4. If you don't trust your engineers, invest more in CI, or hire better ones
Similar to the other thread. What are your most important best practice implementation tips?
Mine:
NEVER write directly to the destination file. Write to temp file with unique/random file name and do atomic rename to dest file when done. Guarantees no 1/2-written files.
Yours?
@lemire
one more reason not to allocate tiny amounts is that a pointer itself is 8 bytes. if you allocate only 16 bytes, the pointer returned already adds 50% overhead
even in the @'s here there are people thinking you need a team of 10 and 6 years to make an engine, my point is literally just that a bare minimum 2D game engine is like, a few thousand lines of code
ANNOUNCEMENT: after 11+ years working on the ice team at Naughty Dog, today is my last day. What a blast it's been working at this studio full of smart and talented folks. Watch this space for what comes next...
as a former game boy dev, this kind of blows my mind. the efficient algorithm for updating a decimal score was a part of every electromechanical pinball machine, before the IC was invented
Super Mario World is so staggeringly inefficient at displaying the player's score that in a worst-case scenario, up to 16.548% of the console's computing power is spent entirely on displaying the score on every frame. Details in reply.
It's hard to write abstractions in C, because it's so painfully clear what the cost of abstractions is. In other languages abstractions appear to be free, because there is already so much stuff going on behind your back that you don't notice the difference.
i keep asking this question: is there any data to support the rules that were put in place to make your code more "readable" or "maintainable"?
would love an honest answer
The entire point of the C++ venture, is to establish an ever expanding event horizon, beyond which nobody can claim to truly understand a programming language. This powers a succession of books, conventions, speakers
I find it interesting how the technique used mainly in Doom and Quake (BSP) was so incredibly useful for the time (for culling, software rendering, depth sorting and even collision detection), but nowadays it has almost completely fallen into disuse… (short 🧵👇 why!)
At this point I'm no longer convinced that templates - the culprit for both source and binary bloat - belong as a language feature.
I'd much rather have C with a code generation pre-process step, than have C++ with full templates in source
@FlohOfWoe
@nice_byte
That, and, allocators are for big things, at least a memory page. If you find yourself calling any allocator to get space for 2 integers, you're delivering a pizza in an empty 18 wheeler truck
@qmannj
in the code in question, it was a test you could do early in the function, the result of which enabled you to skip a few math instructions. but it's the kind of function you'd likely call 1,000s of times in a row, good candidate for ILP / SIMD ops
@trobblet
unity's ECS, like a textbook ECS, is all about laying data out in packed tables, rather than in isolated objects, which plays more to the strengths of what hardware actually does
It's 2019 and people are still writing code to read files into memory buffers, sometimes byte-by-byte. People, memory-mapped files are simpler, faster, and lower-maintenance. It's really worth it to check them out, if you haven't already.
Bounding Half-Space Hierarchy:
unsorted AABB array reported 4540 intersections in 1.228813 seconds
sorted AABB array reported 4540 intersections in 0.017620 seconds
not sure what to compare to. how else to sort array of 3D objects for sublinear search?
this is pretty close to what i'd expect to see in custom silicon, and better than the thing I wrote with integer divides
might trim out that lookup table tho
lossy render targets are a grim omen for a future, in which CPU buffers also become lossy, and determinism of output becomes a thing you pay for on an as-needed basis
A15 GPU tech talk is now available.
The GPU has increased core count, increased f32 math rate per core, lossy renderable textures that save memory storage & bandwidth, support for sparse depth & stencil textures, and new SIMD shuffle & fill instructions.
those people who complain about energy waste from blockchain - wonder how they'd feel about the time and energy wasted by garbage-collected computer languages, which underlie a majority of software.
inheritance is waaaay oversold as some kind of uber-fundamental science truism, but in languages without it, i'm never tempted to implement inheritance
One of the greatest design principles I have carried over from my OOP days is "favor composition over inheritance". Applicable to just about any style of programming, underappreciated outside OOP
If you want to know if someone is a graphics person or a hardware person, ask them what problem mip maps solve.
A graphics person will say that it solves the aliasing problem, and a hardware person will say that it solves the texture bandwidth problem.
@SebAaltonen
Long ago while working on the PS4 SDK texture compressor, I added a mode that did a whole-image PCA, transformed the image into that space, and then compressed as 2-channel BC4. for most game textures, looked "just as colorful" as 3 channels, but with much higher precision
You can architect a whole product on an assumption (e.g. "malloc is free") that turns out to be false, and at some point in your scaling journey, the whole product is uniformly slow and there is nothing worth optimizing
Hotspot performance engineering fails:
Developers often believe that software performance follows a Pareto distribution: 80% of the running time is spent in 20% of the code. Using this model, you can write most of your code without any care for performance and focus on the
This concept can be driven further: if you have 1,000 objects, don't put *any* of the data "in the object", store it all in some central location for bulk processing. Computers like to work on big packed buffers of uniform data, not "objects"
Don’t use a parent pointer/handle. Don’t track your children in an array in your tools. What am I blabbing about?
Check out Ron Pieket’s binary relation library. Sure to challenge the way you organize data in your editors.
"vulkan ray tracing" was my cue that the mask of claiming to be a low-level hardware wrapper had finally come off, and the transformation into OpenGL Next was complete
I wish Vulkan would start removing older functions and structures instead of needing extensions constantly…be bold and break old code. Otherwise becomes like Opengl :(
"ok kids, here's a practical application of math. if you draw a map of countries, how many colors do you need, such that no countries that share a border have the same color?"
kids: "we give up"
graphic designer wife: "4"
"yes! but why"
"CMYK"
made a GitHub repo with an AABB vs AABO showdown. culls better, is faster
AABB reported 45333 intersections in 3.073578 seconds
AABO reported 41848 intersections in 2.297107 seconds
Since modern software spends about 50% of its time in a spinning blue circle... Just imagine how much faster computers could be, if we had dedicated hardware for the spinning blue circle