This is a “soft tech” post. If that isn’t your thing, don’t worry, I’ll be returning to some cool “hard tech” and interesting algorithms after this. I’ve been abusing the heck out of the GPU texture sampler lately, so be on the lookout for some posts on that soon (;
I’m about to show you some of the fastest code there is. It’s faster than the fastest real time raytracer, it’s faster than Duff’s Device.
Heck, despite the fact that it runs on a classical computer, it runs faster than Shor's Algorithm, which uses quantum computing to factor integers so quickly that it breaks modern cryptographic algorithms.
This code also runs faster than Grover's Algorithm, another quantum algorithm that can search an unsorted list in O(sqrt(N)).
Even when compiled in debug it runs faster than all of those things.
Are you ready? Here it is…
// Some of the fastest code the world has ever seen
int main (int argc, char **argv) { return 0; }
Yes, the code does nothing and that is precisely why it runs so fast.
The Secret to Writing Fast Code
The secret to writing fast code, no matter what you are writing, is simple: don't do anything that is too slow.
Follow along with a made-up example to see what I'm talking about.
Let's say you started with a main() function like the one I showed above, and you decided you want to make a real time raytracer that runs on the CPU.
First thing you do is figure out what frame rate you want it to run at, at the desired resolution. From there, you know how many milliseconds you have to render each frame, and now you have a defined budget you need to stay inside of. If you stay in that budget, you’ll consider it a real time raytracer. If you go outside of that budget, it will no longer be real time, and will be a failed program.
You may get camera control working and primary rays intersecting a plane, and find you’ve used 10% of your budget and 90% of the budget remains. So far so good.
Next up you add some spheres and boxes, diffuse and specular shade them with a directional light and a couple point lights. You find that you’ve used 40% of your budget, and 60% remains. We are still looking good.
Next you decide you want to add reflection and refraction, allowing up to 3 ray bounces. You find you are at 80% of your budget and are still looking good. We are still running fast enough to be considered real time.
Now you say to yourself “You know what? I’m going to do 4x super sampling for anti aliasing!”, so you shoot 4 rays out per pixel instead of 1 and average them.
You profile and uh oh! You are at 320% of your budget! Your ray tracer is no longer real time!
What do you do now? Well, hopefully it’s obvious: DON’T DO THAT, IT’S TOO SLOW!
So you revert it and maybe drop in some FXAA as a post processing pass on your render each frame. Let's say you are now at 95% of your budget.
Now you may want to add another feature, but with only 5% of your budget left you probably don’t have much performance to spare to do it.
So you implement whatever it is, and find that you are at 105% of your budget.
Unlike the 4x super sampling, which was 220% over budget, this new feature being only 5% over budget isn't THAT much. At this point you could profile something that already exists (maybe even your new feature) and see if you can improve its performance. Or, if you can find some clever solution that gives you a performance boost at the cost of things you don't care about, you can do that to get some performance back. This is a big part of the job as a successful programmer / software engineer – make trade-offs where you gain benefits you care about, at the cost of things you do not care about.
At this point, you can also decide if this new feature is more desired than any of the existing features. If it is, and you can cut an old feature you don’t care about anymore, go for it and make the trade.
Rinse and repeat this process with new features and functionality until you have the features you want, that fit within the performance budget you have set.
Follow this recipe and you too will have your very own real time raytracer (BTW, related: Making a Ray Traced Snake Game in Shadertoy).
Maintaining a performance budget isn’t magic. It’s basically subtractive synthesis. Carve time away from your performance budget by adding a feature, then optimize or remove features if you are over budget. Rinse and repeat until the sun burns out.
Ok, so if it’s so easy, why do we EVER have performance problems?
How Fast Code Gets Slow
Performance problems come up when we are not paying attention. Sometimes we cause them for ourselves, and sometimes things outside of our control cause them.
The biggest way we cause performance problems for ourselves is by NOT MEASURING.
If you don’t know how your changes affect performance, and performance is something you care about, you are going to have a bad time.
If you care about performance, measure performance regularly! Profile before and after your changes and compare the differences. Have automated tests that profile your software and report the results. Understand how your code behaves in the best and worst case, and watch out for algorithms that sometimes take a lot longer than their average case. Stable algorithms make for stable experiences (and stable frame rates in games). With algorithms that have "perf spikes", the spikes sometimes line up on the same frame, giving you a more erratic frame rate, which makes your game seem much worse than having a stable but lower frame rate.
But, again, performance problems aren't always the programmer's fault. Sometimes things outside of our control change and cause us perf problems.
Like what, you might ask?
Well, let’s say that you are tasked with writing some very light database software which keeps track of all employees’ birthdays.
Maybe you use a hash map to store birthdays. The key is the string of the person’s name, and the value is a unix epoch timestamp.
Simple and to the point. Not over-engineered.
Everything runs quickly, your decisions about the engineering choices you made were appropriate and your software runs great.
Now, someone else has a great idea – we have this database software you wrote, what if we use it to keep track of all of our customers and end user birthdays as well?
So, while you are out on vacation, they make this happen. You come back and the “database” software you made is running super slow. There are hundreds of thousands of people stored in the database, and it takes several seconds to look up a single birthday. OUCH!
So hotshot, looks like your code isn’t so fast huh? Actually no, it’s just that your code was used for something other than its original intended usage case. If this had been included in the original specs, you would have done something different (and more complex) to handle this need.
This was an exaggerated example, but this sort of thing happens ALL THE TIME.
If you are working on a piece of software, and the software requirements change, it could turn any of your previous good decisions into poor decisions in light of the new realities.
However, you likely don’t have time to go back and re-think and possibly re-work every single thing you had written up to that point. You move onward and upward, a little more heavy hearted.
The target moved, causing your code to rot a bit, and now things are likely in a less than ideal situation. You wouldn’t have planned for the code you have with the info you have now, but it’s the code you do have, and the code you have to stick with for the time being.
Every time that happens, you incur a little more tech debt / code complexity and likely performance problems as well.
You’ll find that things run a little slower than they should, and that you spend more time fighting symptoms with small changes and somewhat arbitrary rules – like telling people not to use names longer than 32 characters for maximum performance of your birthday database.
Unfortunately change is part of life, and very much part of software development, and it’s impossible for anyone to fully predict what sort of changes might be coming.
Those changes are often due to business decisions (feedback on the product, jockeying for a new position in the marketplace, etc), so they are ultimately what give us our paychecks and are ultimately good things. Take it from me, who has worked at ~7 companies in 15 years: companies that don’t change/adapt die.
So, change sucks for our code, but it’s good for our wallets and keeps us employed 😛
Eventually the less than ideal choices of the past affecting the present will reach some threshold where something will have to be done about it. This will likely happen at the point that it’s easier to refactor some code, than to keep fighting the problems it’s creating by being less than ideal, or when something that really NEEDS to happen CAN’T happen without more effort than the refactor would take.
When that happens, the refactor comes in, where you DO get to go back and rethink your decisions, with knowledge of the current realities.
The great thing about the refactor is that you probably have a lot of stuff that your code is doing which it doesn’t really even NEED to be doing.
Culling that dead functionality feels great, and it’s awesome watching your code become simple again. It’s also nice not having to explain why that section of code behaves the way it does (poorly) and the history of it coming to be. “No really, I do know better, but…!!!”
One of the best feelings as a programmer is looking at a complex chunk of code that has been a total pain, pressing the delete key, and getting a little bit closer back to the fastest code in the world:
// Some of the fastest code the world has ever seen
int main (int argc, char **argv) { return 0; }
PS: Another quality of a successful engineer is being able to constantly improve software as it’s touched. If you are working in an area of code, and you see something ugly that can be fixed quickly and easily, do it while you are there. Since the only constant in software development is change, and change causes code quality to continually degrade, make yourself a force of continual code improvement and help reverse the code’s flow into the trash can.
Engines
In closing, I want to talk about game engines – 3rd party game engines, and re-using an engine from a different project. This also applies to using middleware.
Existing engines are great in that when you and your team know how to use them, you can get things set up very quickly. It lets you hit the ground running.
However, no engine is completely generic. No engine is completely flexible.
That means that when you use an existing engine, there will be some amount of features and functionality which were made without your specific usage case in mind.
You will be stuck in a world where, from day 1, you are incurring the tech debt type problems I described above, but you will likely be unable or unwilling to refactor everything to suit your specific needs.
I don’t mention this to say that engines are bad. Lots of successful games have used engines made by other people, or re-used engines from previous projects.
However, it’s a different kind of beast using an existing engine.
Instead of making things that suit your needs, and then using them, you’ll be spending your time figuring out how to use the existing puzzle pieces to do what you want. You’ll also be spending time backtracking as you hit dead ends, or where your first cobbled together solution didn’t hold up to the new realities, and you need to find a new path to success that is more robust.
Just something to be aware of when you are looking at licensing or re-using an engine, and thinking that it’ll solve all your problems and be wonderful. Like all things, it comes at a cost!
Using an existing engine does put you ahead of the curve: At day 1 you already have several months of backlogged technical debt!
Unfortunately, business realities mean we can’t all just always write brand new engines all the time. It’s unsustainable.
Agree / Disagree / Have something to say?
Leave a comment below, or tweet at me on twitter: @Atrix256