How Performance Became the Nemesis of Secure Python Code

A young man, impressed by a serene, wise man, once asked him, "Why are you never in a hurry?" The wise man pulled a fishing net out of his bag and said: "If you choose well the place to cast the net, you will always have food. Such places are never found where people are in a hurry."

Nothing forecasts the future of a programming language better than the ethos of its community. For Python, one word has dominated the discussion for the past few years: performance.

Compared to others, the Python community seems greatly concerned with performance, and the concern goes far beyond optimization. While many up-and-coming languages promise better performance, they place more emphasis on ease of use. Go gurus, for example, dare to argue that "…readability beats performance every time… [Because] most programs spend a good deal more time being read than they do being executed."

Python is great because its core philosophy of simplicity meets this market reality: engineering hours are more expensive than CPU peta-cycles. If the source of your problem is a rush of eager customers, you can throw more compute instances at it; instances are far easier to acquire than developers.

However, the current Python innovation arc tilts against this core strength. We have drifted away from simple, readable code, and the drift is diluting the security of Python applications.

A Focus on Python Performance Overcomplicates Code 

Let's consider three performance trends that are worrying for application security. More performance is better. But when its cost is great and its benefit marginal, we must ask ourselves: what exactly are we trying to accomplish?

1. The Proliferation of Native Code

It is no secret that much of cutting-edge Python code is shrink-wrapped C++. This makes sense for some computational fields: if the same operations repeat over and over, the code invites optimization.

TensorFlow is 61.5% C++ code while being one of the more popular Python packages. PyTorch exhibits similar qualities; fast, repeated crunching of similar operations boosts output significantly when optimized even a little. In such cases, the decision to switch to native code is by far the top source of performance gains. But the propensity to lace frameworks with native code, especially in the name of ~8% performance gains, is becoming a contagious practice that complicates security.
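
A back-of-the-envelope sketch makes the temptation concrete. The workload below (a squared sum over a million elements; the sizes are arbitrary choices for illustration) runs element by element in the interpreter, then as a single call into NumPy's native loops:

```python
import time
import numpy as np

xs = list(range(1_000_000))
arr = np.array(xs, dtype=np.int64)

t0 = time.perf_counter()
s1 = sum(x * x for x in xs)    # pure Python: interpreted per element
t1 = time.perf_counter()
s2 = int((arr * arr).sum())    # NumPy: one call into compiled C loops
t2 = time.perf_counter()

print(s1 == s2, f"python={t1 - t0:.3f}s numpy={t2 - t1:.3f}s")
```

The native version typically wins by an order of magnitude or more, which is exactly why tensor libraries live in C++. The security question is not whether this works, but what it costs everywhere else.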

Ten years ago, there was a mild preference for "pure" packages: easier to maintain and guaranteed to run independently of the architecture. Today, a package with Rust dependencies garners attention. Argonautica, Polaroid, and many others mostly wrap Rust crates. Nothing is wrong with this in principle: something is fast, so why not wrap it? Rust is almost impossible to beat on performance.

Why waste time rewriting it in pure Python?

First, with a creeping shortage of alternatives, the preponderance of native bindings seeds security threats. Native code complicates the supply chain, escalating both the number and the severity of attacks. The supply chain attack is the chart-topping song of 2020 and 2021. A Rust crate wrapped in Python is two dependencies, not one, and it requires more than double the attention to secure coding principles.

Is doubling all our dependencies now the norm?

Second, native bindings often require compatibility extensions, which are harder to write and test. They run into OS fault lines due to obscure implementation details, and that leads to bugs and vulnerabilities.

Third, native code can wreak havoc in the system because of its lower-level access to the hardware. What would be a mere resource leak in Python turns into a buffer overflow in native code, and the latter is far more dangerous for system and data integrity.
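
To make the contrast concrete, here is a deliberately unsafe sketch (illustrative only; do not run it anywhere you care about): the same out-of-bounds write that Python turns into a tidy exception sails straight through once native memory is involved.

```python
import ctypes

# Pure Python: every access is bounds-checked by the interpreter.
data = bytearray(4)
try:
    data[10] = 1
except IndexError as exc:
    print(f"Caught safely: {exc}")

# Native memory via ctypes: no bounds checking whatsoever.
buf = ctypes.create_string_buffer(4)  # a 4-byte native buffer
ctypes.memmove(buf, b"A" * 64, 64)    # classic overflow: writes past the
                                      # buffer into adjacent memory, and may
                                      # crash or silently corrupt the process
```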

And pardon the cynicism: it is harder to trust native, statically typed code when it is primarily written or integrated by Pythonistas. When you operate every day with an interpreter and a garbage collector watching your back, switching to native code carries a heavy cognitive load. Python experience does not cultivate the mental state necessary for writing secure low-level code. Can you keep track of all the machine's memory-management particulars if you haven't thought about them in a month? Few can.

2. Async Complexity and Instability

In a watershed 2020 article, Cal Paterson argues that Python async code is hardly faster under realistic conditions. The benchmarks tell one story in framework documentation and a different story when running a production-grade, business-logic-laden service paired with a relational database. Async frameworks provide insignificant performance gains despite grand promises. He also highlights the cardinal problem: async frameworks become less stable under load.

But we should be even more concerned with the packages that are midway through the switch to async. It is not always clear that they are better off after investing the effort. Many, like Django, get stuck halfway through the transition, then duct-taped together to push it forward. Now the code is mind-bending and sprinkled with async/sync context adapters, as in the sketch below. Is it meaningfully faster? Unlikely.
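
Each adapter looks innocuous in isolation. Here is a minimal sketch of the pattern (the function names are hypothetical; the adapters come from asgiref, the package Django itself leans on):

```python
from asgiref.sync import async_to_sync, sync_to_async

def legacy_lookup(user_id):
    # Stand-in for old synchronous ORM code.
    return {"id": user_id, "name": "alice"}

async def fetch_user(user_id):
    # Sync code must be wrapped so it runs in a worker thread
    # instead of blocking the event loop...
    return await sync_to_async(legacy_lookup)(user_id)

def sync_view(user_id):
    # ...and sync code calling back into async land needs the
    # opposite adapter. Two hops for one lookup.
    return async_to_sync(fetch_user)(user_id)

print(sync_view(42))
```

Every hop adds a thread handoff and one more place where ordering, transactions, and error handling can quietly go wrong.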

We traded simplicity and readability for what exactly?

This kind of technical debt is not conducive to package maturity or security. Nor does it make the framework more future-proof if a newer async framework like Sanic eventually overtakes it. Many developers come to regret starting the async transition too early after yielding to the performance promoters on their team. The reality is that a large-scale unfinished transition leaves the code base more complicated and worse off than before it began. Arguably, the effort spent on an incomplete transition is better spent on refactoring and bug fixes.

3. JIT Compiler Creep

PyPy usage appears to be growing. While most of the community lives in the world of CPython, production engineers increasingly deploy JIT (just-in-time) flavors of Python powered by custom compilers. Each flavor brings its own additional set of package-compatibility issues and its own ways of addressing them.

For example, the Numba documentation includes a rather long troubleshooting page, and none of the issues on it are trivial. One is ironically titled "The compiled code is too slow." Stack Overflow contains plenty of Numba-related questions that deal with dependencies, installation problems, and bugs.
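
For scale, this is all it takes to route a function through a JIT compiler (a minimal sketch; the function and data are ours, not from Numba's docs):

```python
import numpy as np
from numba import njit

@njit  # compiled to machine code on the first call
def total(xs):
    s = 0.0
    for x in xs:
        s += x
    return s

arr = np.arange(1_000_000, dtype=np.float64)
print(total(arr))  # the first call pays the compilation cost
```

One decorator, and an entire separate compiler toolchain now sits between your code and the CPU.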

Each compiler is a minefield of potential hard-to-detect bugs and vulnerabilities. Compilers are often distributed separately from typical dependencies and can easily fall behind the general security fixes of Python releases. For example, twelve vulnerabilities remained active for months in PyPy's Gentoo distribution after they had already been fixed upstream! Issues like CVE-2020-29651 affected some operating systems and not others simply because the installations sourced PyPy differently.

Indeed, similar security threats can appear with CPython as well. Yet we break a precious DevOps principle: development and production environments must deviate minimally. If you use a JIT compiler in production, use it in development. That much is clear.
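
One cheap way to honor the principle is a startup guard that fails fast when the runtime drifts; a hedged sketch, with the expected flavor and version as assumed placeholders:

```python
import platform
import sys

# Assumed project choices, for illustration only: pin whatever flavor
# and version your dependencies were actually tested against.
EXPECTED_IMPLEMENTATION = "PyPy"
EXPECTED_VERSION = (3, 9)

impl = platform.python_implementation()
if impl != EXPECTED_IMPLEMENTATION or sys.version_info[:2] != EXPECTED_VERSION:
    sys.exit(f"Untested runtime: {impl} {platform.python_version()}")
```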

Ask yourself: how many of your project's dependencies were developed and tested with that same flavor and version of a JIT compiler? Crickets.

As Python matures, the compiler flavors are diverging, primarily in the name of performance. To name a few:

  • Numba
  • Pyston
  • IronPython
  • Psyco
  • Pyjion
  • Unladen Swallow (defunct)
  • Nuitka
  • Shedskin
  • Cinder

Why would anybody write yet another compiler for Python? How much faster is it than the previous one? If we pit each against a business-logic service paired with a relational database, instead of formula crunching, the differences will be marginal.

Adding a performance-oriented compiler is not the same as adding a regular dependency. All of your code will go through it, and all of it will be subtly altered in how it interacts with OS resources and layers. Your entire application inherits all of the deficiencies and incompatibilities, foreseen and unforeseen, of the chosen compiler.

When the compiler is updated, your entire application is affected: the update can introduce a mistake or a circumstantial failure trigger into any part of your codebase. This is a significant security downside, one that would halt the compiler creep if the Python community fully grasped it.

Secure Python Code Starts with Simplicity & Readability

To sum up: we have been running in circles, and it is time to go back to basics. Performance should not be the end. Simple, readable code yields to performance optimization when needed; it contains fewer bugs and vulnerabilities, and it keeps its value longer.

One Python framework immediately comes to mind as an example to imitate. Flask promised to be everything "you need and nothing you don't," and it delivered. It has had a few vulnerabilities over the past ten years, but the list is notably short. It has four core dependencies and is 99.9% Python code.
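
The whole ethos fits in a hello-world (the canonical minimal Flask app):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Simple, readable, auditable: no native bindings,
    # no async adapters, no custom compiler.
    return "Hello, world!"

if __name__ == "__main__":
    app.run()
```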

Python has a bright future, but the next ten years will be a stretch of security tumult if nothing changes. The fixation on performance was the seed, the proliferation of native code the blooming flower, and the compilers the berries, some of which may grow to the size of bitter melons.

The Zen of Python calls us to move in the opposite direction. It may be time to slow down a little so that we can move forward securely.