
Understand How Much Memory Your Python Objects Use


Python is a fantastic programming language. It is also known for being pretty slow, due mostly to its enormous flexibility and dynamic features. For many applications and domains, this is not a problem, thanks to modest performance requirements and various optimization techniques. It is less well known that Python object graphs (nested dictionaries of lists, tuples, and primitive types) take a significant amount of memory. This can be a much more severe limiting factor, due to its effects on caching, virtual memory, and multi-tenancy with other programs, and in general because it exhausts the available memory, which is a scarce and expensive resource.

It turns out that it is not difficult to figure out how much memory is actually consumed. In this article, I'll walk you through the intricacies of a Python object's memory management and show how to measure the consumed memory accurately.

In this article, I focus solely on CPython—the primary implementation of the Python programming language. The experiments and conclusions here don't apply to other Python implementations like IronPython, Jython, and PyPy.

Depending on the Python version, the numbers are sometimes a little different (especially for strings, which are always Unicode), but the concepts are the same. In my case, I am using Python 3.10.

As of 1st January 2020, Python 2 is no longer supported, and you should have already upgraded to Python 3.

Hands-On Exploration of Python Memory Usage

First, let's explore a little bit and get a concrete sense of the actual memory usage of Python objects.

The sys.getsizeof() Built-in Function

The standard library's sys module provides the getsizeof() function. That function accepts an object (and an optional default), calls the object's __sizeof__() method, and returns the result (plus some garbage-collector overhead if the object is managed by the garbage collector), so you can make your own objects inspectable as well.
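For example, here is a hypothetical Point class (not part of the standard library) that reports the memory of its coordinates as part of its own size:

```python
import sys

class Point:
    """A hypothetical class whose reported size includes its two coordinates."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __sizeof__(self):
        # the object's own size plus the sizes of its two attributes
        return object.__sizeof__(self) + sys.getsizeof(self.x) + sys.getsizeof(self.y)

p = Point(3, 4)
# getsizeof() returns __sizeof__() plus the garbage-collector overhead
print(sys.getsizeof(p))
```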

Measuring the Memory of Python Objects

Let's start with some numeric types:
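On my 64-bit Python 3.10 build, the numbers come out like this (they may differ slightly on other builds):

```python
import sys
from decimal import Decimal

print(sys.getsizeof(17))           # 28
print(sys.getsizeof(17.0))         # 24
print(sys.getsizeof(Decimal(17)))  # 104 on my build
```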

Interesting. An integer takes 28 bytes.

Hmm… a float takes 24 bytes.

Wow. 104 bytes! This really makes you think about whether you want to represent a large number of real numbers as floats or Decimals.

Let's move on to strings and collections:
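Again, on my 64-bit Python 3.10 build (ASCII strings; non-ASCII characters can take more than one byte each):

```python
import sys

print(sys.getsizeof(''))       # 49
print(sys.getsizeof('a'))      # 50
print(sys.getsizeof('ab'))     # 51
print(sys.getsizeof(b''))      # 33
print(sys.getsizeof(b'abc'))   # 36
```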

OK. An empty string takes 49 bytes, and each additional character adds another byte. That says a lot about the tradeoffs of keeping multiple short strings where you'll pay the 49 bytes overhead for each one vs. a single long string where you pay the overhead only once.

The bytes object has an overhead of only 33 bytes. 

Let's look at lists.
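Here is what my 64-bit Python 3.10 build reports:

```python
import sys

print(sys.getsizeof([]))        # 56
print(sys.getsizeof([1]))       # 64
print(sys.getsizeof([1, 2]))    # 72
print(sys.getsizeof(['a rather long string of many characters']))  # 64
```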

What's going on? An empty list takes 56 bytes, but each additional int adds just 8 bytes, even though the size of an int is 28 bytes. A list that contains a long string takes just 64 bytes.

The answer is simple. The list doesn't contain the int objects themselves. It just contains an 8-byte (on 64-bit versions of CPython) pointer to the actual int object. What that means is that the getsizeof() function doesn't return the actual memory of the list and all the objects it contains, but only the memory of the list and the pointers to its objects. In the next section I'll introduce the deep_getsizeof() function, which addresses this issue.

The story is similar for tuples. The overhead of an empty tuple is 40 bytes vs. the 56 of a list. Again, this 16-byte difference per sequence is low-hanging fruit if you have a data structure with a lot of small, immutable sequences.
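You can verify this yourself (numbers from my 64-bit Python 3.10 build):

```python
import sys

print(sys.getsizeof(()))       # 40
print(sys.getsizeof((1,)))     # 48
print(sys.getsizeof((1, 2)))   # 56
```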

Sets and dictionaries ostensibly don't grow at all when you add items, but note the enormous overhead.
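The exact numbers vary by build (these are from my 64-bit Python 3.10), but the pattern holds: both types preallocate an internal table, so small additions don't change the reported size.

```python
import sys

print(sys.getsizeof(set()))                       # 216
print(sys.getsizeof({1, 2, 3}))                   # 216 - the preallocated table absorbs small sets
print(sys.getsizeof({}))                          # 64
print(sys.getsizeof({'a': 1}))                    # 232 on my build
print(sys.getsizeof({'a': 1, 'b': 2, 'c': 3}))    # 232 - same, until the table must grow
```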

The bottom line is that Python objects have a huge fixed overhead. If your data structure is composed of a large number of collection objects like strings, lists and dictionaries that contain a small number of items each, you pay a heavy toll.

The deep_getsizeof() Function

Now that I've scared you half to death and also demonstrated that sys.getsizeof() can only tell you how much memory a primitive object takes, let's take a look at a more adequate solution. The deep_getsizeof() function drills down recursively and calculates the actual memory usage of a Python object graph.

There are several interesting aspects to this function. It takes into account objects that are referenced multiple times and counts them only once by keeping track of object ids. The other interesting feature of the implementation is that it takes full advantage of the collections module's abstract base classes. That allows the function to handle, very concisely, any collection that implements either the Mapping or Container base classes, instead of dealing directly with myriad collection types like str, bytes, list, tuple, dict, OrderedDict, set, frozenset, etc.

Let's see it in action:
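Here is an implementation along the lines described above, followed by a few calls (the byte counts in the comments are from my 64-bit Python 3.10 build and may differ on yours):

```python
from collections.abc import Mapping, Container
from sys import getsizeof

def deep_getsizeof(o, ids):
    """Find the memory footprint of a Python object graph.

    The 'ids' set tracks objects that were already visited,
    so shared objects are counted only once.
    """
    if id(o) in ids:
        return 0

    d = deep_getsizeof
    r = getsizeof(o)
    ids.add(id(o))

    # str and bytes are Containers, but their getsizeof() already
    # accounts for their content, so stop the recursion here
    if isinstance(o, (str, bytes)):
        return r

    if isinstance(o, Mapping):
        return r + sum(d(k, ids) + d(v, ids) for k, v in o.items())

    if isinstance(o, Container):
        return r + sum(d(x, ids) for x in o)

    return r

x = '1234567'
print(deep_getsizeof(x, set()))         # 56
print(deep_getsizeof([], set()))        # 56
print(deep_getsizeof([x], set()))       # 120
print(deep_getsizeof(5 * [x], set()))   # 152 - x is counted once, its 5 pointers each time
```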

A string of length 7 takes 56 bytes (49 bytes of overhead + 1 byte per character).

An empty list takes 56 bytes (just overhead).

A list that contains the string "x" takes 120 bytes (56 + 8 + 56).

A list that contains the string "x" five times takes 152 bytes (56 + 5*8 + 56).

The last example shows that deep_getsizeof() counts references to the same object (the x string) just once, but each reference's pointer is counted.

Treats or Tricks

It turns out that CPython has several tricks up its sleeve, so the numbers you get from deep_getsizeof() don't fully represent the memory usage of a Python program.

Reference Counting

Python manages memory using reference counting semantics. Once an object is not referenced anymore, its memory is deallocated. But as long as there is a reference, the object will not be deallocated. Things like cyclical references can bite you pretty hard.
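You can watch the reference count with sys.getrefcount(), which reports one extra reference for its own argument:

```python
import sys
import gc

a = []
r1 = sys.getrefcount(a)   # 2: the name 'a' plus getrefcount()'s own argument
b = a
r2 = sys.getrefcount(a)   # 3: 'b' now refers to the same list
print(r1, r2)

a.append(a)               # the list now references itself: a cycle
del a, b                  # reference counting alone can't reclaim the cycle...
freed = gc.collect()      # ...but the cyclic garbage collector can
print(freed)
```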

Small Objects

CPython manages small objects (less than 256 bytes) in special pools on 8-byte boundaries. There are pools for 1-8 bytes, 9-16 bytes, and all the way to 249-256 bytes. When an object of size 10 is allocated, it is allocated from the 16-byte pool for objects 9-16 bytes in size. So, even though it contains only 10 bytes of data, it will cost 16 bytes of memory. If you allocate 1,000,000 objects of size 10, you actually use 16,000,000 bytes and not 10,000,000 bytes as you may assume. This 60% extra overhead is obviously not trivial.
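The rounding up to the next 8-byte boundary can be sketched as (a simplified model of the size classes, not pymalloc's actual code):

```python
def pool_block_size(n):
    """Size class (in bytes) a pymalloc-style allocator would use for an n-byte request."""
    return (n + 7) // 8 * 8

print(pool_block_size(10))              # 16
print(pool_block_size(10) * 1_000_000)  # 16000000 bytes for a million 10-byte objects
```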

Integers

CPython keeps a global list of all the integers in the range -5 to 256. This optimization strategy makes sense because small integers pop up all over the place, and given that each integer takes 28 bytes, it saves a lot of memory for a typical program.

It also means that CPython pre-allocates 262 * 28 = 7336 bytes for all these integers, even if you don't use most of them. You can verify it by using the id() function, which returns the address of the actual object in CPython. If you call id(x) for any x in the range -5 to 256, you will get the same result every time (for the same integer). But if you try it for integers outside this range, each one will be different (a new object is created on the fly every time).

Here are a few examples within the range:
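The int('...') calls below just construct the integers at run time, so the compiler can't fold them into a single shared constant:

```python
print(id(int('100')), id(int('100')))  # the same address twice
print(int('100') is int('100'))        # True: the cached object is reused
print(int('-5') is int('-5'))          # True
print(int('256') is int('256'))        # True
```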

Here are some examples outside the range:
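Both objects are kept alive here so their addresses can't coincide by accident:

```python
x = int('257')       # constructed at run time, outside the cached range
y = int('257')
print(x is y)        # False: a brand-new int object is created each time
print(id(x), id(y))  # two different addresses
```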

Python Memory vs. System Memory

CPython is kind of possessive. In many cases, when memory objects in your program are not referenced anymore, they are not returned to the system (e.g. the small objects). This is good for your program if you allocate and deallocate many objects that belong to the same 8-byte pool because Python doesn't have to bother the system, which is relatively expensive. But it's not so great if your program normally uses X bytes and under some temporary condition it uses 100 times as much (e.g. parsing and processing a big configuration file only when it starts).

Now, that 100X memory may be trapped uselessly in your program, never to be used again and preventing the system from allocating it to other programs. The irony is that if you use the multiprocessing module to run multiple instances of your program, you'll severely limit the number of instances you can run on a given machine.

Memory Profiler

To gauge and measure the actual memory usage of your program, you can use the memory_profiler module. I played with it a little bit, and I'm not sure I trust the results. Using it is very simple. You decorate a function (it could be the main function) with the @profile decorator, and when the program exits, the memory profiler prints to standard output a handy report that shows the total and the changes in memory for every line. Here is a sample program I ran under the profiler:
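The program looked roughly like this (memory_profiler is a third-party package, installed with pip install memory-profiler; the fallback decorator is only there so the script also runs without it):

```python
try:
    from memory_profiler import profile
except ImportError:
    # fall back to a no-op decorator if the package isn't installed
    def profile(func):
        return func

@profile
def main():
    a = []
    b = []
    c = []
    for n in range(100000):
        a.append(5)            # an integer inside the [-5, 256] cached range
    for n in range(100000):
        b.append(300)          # an integer outside the cached range
    for n in range(100000):
        c.append('123456789012345678901234567890')  # the same string object
    del a
    del b
    del c

    print('Done!')

if __name__ == '__main__':
    main()
```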

In the report, you can see that there is 17.3 MB of memory overhead before the function even starts. The reason the memory doesn't increase when adding integers, both inside and outside the [-5, 256] range, and also when adding the string, is that a single object is used in all cases. It's not clear why the first loop of range(100000) adds 0.8 MB while the second adds just 0.7 MB and the third adds 0.8 MB. Finally, when deleting the a, b, and c lists, -0.6 MB is released for a, -0.8 MB for b, and -0.8 MB for c.

How to Trace Memory Leaks in Your Python Application With tracemalloc

tracemalloc is a Python module that acts as a debug tool to trace memory blocks allocated by Python. Once tracemalloc is enabled, you can obtain the following information:

  • identify where the object was allocated
  • give statistics on allocated memory
  • detect memory leaks by comparing snapshots

Consider the example below:
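Here is a small example along those lines (the variable names are mine):

```python
import tracemalloc

tracemalloc.start()                       # start tracing memory allocations

snapshot1 = tracemalloc.take_snapshot()
data = [str(n) * 10 for n in range(1000)]  # allocate something worth tracing
snapshot2 = tracemalloc.take_snapshot()

# per-line statistics of the second snapshot
top_stats = snapshot2.statistics('lineno')
for stat in top_stats[:3]:
    print(stat)

# compare the two snapshots to spot where memory grew
diff = snapshot2.compare_to(snapshot1, 'lineno')
print(diff[0])
```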

Explanation

  • tracemalloc.start()—starts the tracing of memory
  • tracemalloc.take_snapshot()—takes a memory snapshot and returns the Snapshot object
  • Snapshot.statistics()—sorts the traced memory blocks and returns the number and total size of objects, grouped by where they were allocated. lineno indicates that grouping and sorting will be done by the line number in the file.

When you run the code, the output lists, for each source line, the number of allocated memory blocks and their total size.

Conclusion

CPython uses a lot of memory for its objects. It also uses various tricks and optimizations for memory management. By keeping track of your object's memory usage and being aware of the memory management model, you can significantly reduce the memory footprint of your program.

This post has been updated with contributions from Esther Vaati. Esther is a software developer and writer for Envato Tuts+.
