It is quite common for a Unity game’s average frame rate to be at or near the target while the gameplay still feels choppy. Frequent or ill-timed garbage collector runs can cause spikes in frame times that, while not large enough to significantly affect the average frame rate, can have an enormous impact on the user-perceived “smoothness” of the experience. Because memory management in C# is automatic, solving allocation-related slowdowns can be challenging. Control over the garbage collector’s behavior is indirect at best, and at times it can be necessary to work around it rather than with it.

In this post we’ll briefly look at some well-established techniques for avoiding garbage collector-related performance issues in Unity, then move on to some advanced techniques developers can lean on when the common approaches aren’t enough.

Understanding memory allocation in Unity

First, let’s review the two broad types of memory allocation: stack allocations and heap allocations.

Stack memory is a small region of memory used to store local values within a function. Since the lifetimes of these allocations match the scope of the function, they can be “deallocated” automatically at the end of the function and do not need to be tracked by the garbage collector. Since the stack is implemented as a last-in, first-out (LIFO) buffer, allocating and deallocating stack memory is as simple as moving a stack pointer and can be considered more-or-less free.

Heap memory is a much larger region of memory. Objects which are too large to fit on the stack or that need to persist beyond the allocating scope are allocated on the heap. Objects managed by the C# garbage collector are always allocated on the heap. In C#, classes are reference types and structs are value types. This semantic difference between the two has other implications for how they are used, but for our context the important distinction is how the two are allocated. Reference types are always allocated on the heap. Value types may be allocated on the stack or the heap, depending on how they are used. A struct instance declared as a local variable will be allocated on the stack. One declared as a field of a reference type is stored inline within that object’s heap allocation, and one assigned to an object or interface reference is copied into a new heap allocation (this is called “boxing”). Arrays are also reference types and are allocated on the heap, even if the element type is a value type.
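
As a quick sketch of where value-type data can end up (the Point and Holder types here are made up purely for illustration):

public struct Point { public float x; public float y; }

public class Holder
{
	// Stored inline within Holder's heap allocation
	public Point member;
}

public void Example()
{
	// Local struct: allocated on the stack
	Point local = new Point();

	// Assigning a struct to an object reference copies it into a new heap allocation ("boxing")
	object boxed = local;

	// The array itself is a reference type and lives on the heap, even though Point is a value type
	Point[] points = new Point[16];
}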

In short, the garbage collector keeps track of all allocations on the managed heap and automatically cleans up memory that is no longer in use. You are no longer responsible for freeing the memory you allocate, but this convenience comes at the cost of some additional overhead. Modern garbage collectors are complex and well optimized, but they can still be slow when keeping track of potentially millions of objects and references.

Remember that objects are not deallocated the moment the last reference to them is released. Instead, the garbage collector will do so at some later point. It can be hard to predict when the garbage collector will run these potentially expensive cleanup operations, and that unpredictability is what ultimately leads to a choppy user experience.

Techniques for avoiding garbage collector spikes with managed memory

In this section, we’ll cover a few common approaches to reducing costly garbage collector spikes.

Avoid unnecessary allocations

The most basic principle of optimizing for the garbage collector is to simply avoid allocations when they aren’t necessary. There are many ways to allocate memory unintentionally when working in Unity, so it’s important to always use the profiler to confirm that you aren’t allocating memory when it isn’t needed. Here are a few key tips to reduce allocations:

Favor structs over classes
Consider using structs instead of classes for temporary objects. When classes are more appropriate or you need arrays, consider caching the object and reusing it each frame.
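
For example, rather than allocating a new collection every frame, you can cache one and clear it before each use (a minimal sketch; the Enemy type and GatherVisibleEnemies method are placeholders for your own code):

// Allocates a new list every time it runs
void Update()
{
	List<Enemy> visible = new List<Enemy>();
	GatherVisibleEnemies(visible);
}

// Reuses a single cached list for the lifetime of the component
private readonly List<Enemy> cachedVisible = new List<Enemy>();
void Update()
{
	// Clear() keeps the list's internal capacity, so no new allocation happens here
	cachedVisible.Clear();
	GatherVisibleEnemies(cachedVisible);
}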

Be careful with string manipulation
String manipulation is a common source of unintended allocations. Strings are immutable in C#, so every time you concatenate or otherwise modify a string, a new string is allocated. Consider using StringBuilder to reduce the number of allocations when working with strings.

// This allocates 5 strings every time it is called
public string Vector2ToString(Vector2 vector)
{
    return "x: " + vector.x + " y: " + vector.y;
}

// This reuses a StringBuilder instance and only allocates
// one new string
private StringBuilder builder = new StringBuilder();
public string Vector2ToString(Vector2 vector)
{
	builder.Clear();
	builder.Append("x: ")
		.Append(vector.x)
		.Append(" y: ")
		.Append(vector.y);
	
	return builder.ToString();
}

Avoid creating APIs which return new arrays/collections
Have your APIs take a target collection as a parameter instead.

// Allocates a new array every time it is called
public float[] GetRandomValues()
{
	float[] values = new float[10];
	for (int i = 0; i < values.Length; ++i)
	{
		values[i] = Random.value;
	}
	return values;
}

// Allow the caller to decide to allocate a new array or reuse an existing one
public void GetRandomValues(float[] values)
{
	if (values == null) return;
	for (int i = 0; i < values.Length; ++i)
	{
		values[i] = Random.value;
	}
}

Avoid boxing structs
Don’t assign a struct instance to a reference or parameter of type object. When building APIs which take interface parameters that may be struct instances, consider using generics with the interface as a generic constraint rather than as the parameter type.

public interface IFoo { }
public class Foo : IFoo { }
public struct Bar : IFoo { }

public void BoxedInterface(IFoo foo)
{
	// ...
}

public void GenericInterface<T>(T foo) where T : IFoo
{
	// ...
}

public void Update()
{
	Foo foo = new Foo();
	Bar bar = new Bar();
	
	// These don't allocate
	BoxedInterface(foo);
	GenericInterface<Foo>(foo);
	GenericInterface<Bar>(bar);

	// This allocates because bar is "boxed"
	BoxedInterface(bar);
}

Be cautious when using closures
Local variables captured by an anonymous function are hoisted into a compiler-generated class, so capturing a local, even a value type, causes a heap allocation (and creating the delegate allocates as well).

float number = 0f;

// This allocates a special reference type to wrap the "number" variable
Action addRandomValueDelegate = () =>
{
	number += Random.value;
};
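
If that allocation shows up in the profiler, a common workaround is to avoid capturing locals entirely, for example by moving the state into a field and caching the delegate so it is only created once (a sketch of one option):

private float number = 0f;
private Action addRandomValueDelegate;

void Awake()
{
	// The lambda only touches an instance field, so it captures "this" and no closure class is generated.
	// The delegate itself still allocates, but only once here instead of every time it is needed.
	addRandomValueDelegate = () => number += Random.value;
}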

Object pooling

Reusing allocated objects is one of the most common techniques for avoiding garbage collection issues. When many instances of an object are needed, it’s helpful to preallocate a large collection of these objects, take one from the pool whenever it’s needed, and return it to the pool once you’re done with it so it can be reused somewhere else. This technique is so common that Unity introduced its own set of generic object pool classes (the UnityEngine.Pool namespace) in Unity 2021.

Whether you choose to use Unity’s solution or implement your own, you’ll almost certainly encounter a problem in Unity that is best solved by object pooling.

Object pooling is safe and broadly applicable. It is especially useful in Unity for GameObjects, since creating and destroying GameObjects and Components has additional overhead beyond the memory allocation. Keep in mind, though, that object pooling does not completely avoid the garbage collector. Instead, it gives you some indirect control over when the objects in the pool are cleaned up. The garbage collector is still responsible for cleaning up the pooled objects when the pool itself is released, but you can choose to do so at a convenient time such as during a scene transition, or you can keep the pool alive for the entire lifetime of the app.
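
Here’s a rough sketch of what using Unity’s built-in pool can look like (UnityEngine.Pool.ObjectPool<T>; the bullet prefab and spawner class are placeholders for the example):

using UnityEngine;
using UnityEngine.Pool;

public class BulletSpawner : MonoBehaviour
{
	[SerializeField] private GameObject bulletPrefab;
	private ObjectPool<GameObject> pool;

	void Awake()
	{
		pool = new ObjectPool<GameObject>(
			createFunc: () => Instantiate(bulletPrefab),
			actionOnGet: bullet => bullet.SetActive(true),
			actionOnRelease: bullet => bullet.SetActive(false),
			actionOnDestroy: bullet => Destroy(bullet),
			defaultCapacity: 64);
	}

	// Take a bullet from the pool instead of instantiating a new one
	public GameObject Spawn() => pool.Get();

	// Return the bullet to the pool instead of destroying it
	public void Despawn(GameObject bullet) => pool.Release(bullet);
}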

Manually timing collections

Although garbage collection is intended as an automatic process, it is possible to exert some control using the UnityEngine.Scripting.GarbageCollector API. It’s possible to completely disable the garbage collector this way, or to configure it so it only collects when manually invoked. Since no managed heap memory will be freed while automatic collection is disabled, new allocations will continually increase the size of the managed heap until either the garbage collector is reenabled or the application runs out of memory and is terminated. For this reason, disabling the garbage collector should be considered an unsafe practice, but it can make sense to do so temporarily during performance-critical parts of the game. Again, it is critical to use the profiler to confirm that the memory allocated during those phases will not dangerously inflate the heap.
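
As a rough sketch, assuming your game has natural hooks for the start and end of the performance-critical stretch:

using UnityEngine.Scripting;

public void OnCriticalSectionStarted()
{
	// No automatic collections will run until the mode is switched back
	GarbageCollector.GCMode = GarbageCollector.Mode.Disabled;
}

public void OnCriticalSectionEnded()
{
	GarbageCollector.GCMode = GarbageCollector.Mode.Enabled;

	// Run a full collection now, while a hitch is acceptable (e.g., behind a loading screen)
	System.GC.Collect();
}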

Techniques for allocating unmanaged memory

The techniques for working with the garbage collector discussed so far are likely sufficient in the vast majority of cases. However, if you absolutely need to dynamically allocate memory without the cost of the garbage collector, we’ll cover a few of those options now.

Native collections

If you’ve used Unity’s C# Jobs system, you’re already familiar with their native collection types. The NativeArray<T> is a convenient way to allocate unmanaged heap memory. Unmanaged heap allocations are completely ignored by the garbage collector, which means the responsibility of deallocating the memory is returned to the developer. NativeArray<T> itself is a struct which holds a pointer to the heap memory, so copies of a NativeArray instance will point to the same data. NativeArray<T> implements the IDisposable interface, so you must either call Dispose() directly or via a using statement in order to deallocate the memory. The heap memory will be leaked if the last copy of the NativeArray<T> goes out of scope before it is disposed.

Native collections can only be used with unmanaged types, so you’ll be limited to built-in value types, pointer types, and custom structs which contain only fields of other unmanaged types.

// Valid
NativeArray<Vector3> positions = new NativeArray<Vector3>(128, Allocator.Persistent);

// Invalid
NativeArray<Transform> transforms = new NativeArray<Transform>(128, Allocator.Persistent);
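
And a minimal sketch of the full lifecycle, releasing the allocation explicitly once the data is no longer needed:

// Allocator.Temp is intended for very short-lived buffers; use Allocator.Persistent for long-lived ones
using (NativeArray<Vector3> positions = new NativeArray<Vector3>(128, Allocator.Temp))
{
	for (int i = 0; i < positions.Length; ++i)
	{
		positions[i] = Vector3.one * i;
	}
} // Dispose() is called here, freeing the unmanaged memory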

Unity’s implementation of native collections comes with some built-in safety checks to warn of memory leaks, but using these types is still a bit less safe than using managed arrays. If you need to squeeze out every last drop of performance in a particularly hot loop, there are unsafe variants of the native collections which omit the safety checks.

Using spans and the stackalloc keyword

Like NativeArray<T>, arrays allocated using the stackalloc keyword are limited to arrays of unmanaged types. Unlike NativeArray<T>, stack-allocated arrays do not use heap memory at all. Stack-allocated arrays can be assigned to variables of type Span<T>. If you’re not familiar with spans, think of them like a pointer with a length. Spans have been growing in popularity and much of the .NET API has been updated to work with them. The same is unfortunately not true for the Unity API, but they can still be quite useful in your own code.

Since stack-allocated arrays are extremely fast to allocate and do not require you to explicitly deallocate them, they are most useful for small scratch buffers needed to perform some calculation inside a function. Keeping the data on the stack also benefits data locality. If you’ve been following Unity’s work on their Data Oriented Tech Stack, you’ve seen examples of how much data locality can affect performance.

Significant caution is required when performing dynamic allocations on the stack. Stack size is limited, so allocating too much on the stack can cause a StackOverflowException and terminate your app. Be particularly careful to validate the size of the allocation when it is passed as a parameter to a function. Avoid using stackalloc inside loops. These types of allocations are deallocated when the function returns, not when they go out of scope. That means a 128 byte allocation in a loop that runs 1000 times actually allocates 128 kilobytes. Instead, allocate the buffer outside the loop and reuse it for each iteration.
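
As a minimal illustration of that last point (Process stands in for whatever work you do with the buffer):

// Bad: each iteration reserves another 128 bytes that isn't released until the method returns
for (int i = 0; i < 1000; ++i)
{
	Span<byte> scratch = stackalloc byte[128];
	Process(scratch);
}

// Better: allocate once, then reuse the same buffer on every iteration
Span<byte> buffer = stackalloc byte[128];
for (int i = 0; i < 1000; ++i)
{
	Process(buffer);
}

The example below puts these pieces together, using a stack-allocated buffer for small inputs and falling back to a managed array for larger ones.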

public string IncrementCharacters(string input)
{
	if (string.IsNullOrEmpty(input)) return input;

	// If the input string is short we will allocate the buffer directly on the stack, otherwise we'll use a
	// managed char array.
	Span<char> buffer = input.Length < 64 ? stackalloc char[input.Length] : new char[input.Length];

	for (int i = 0; i < input.Length; ++i)
	{
		buffer[i] = (char)(input[i] + 1);
	}

	// Note that we cannot return `buffer` here directly. If it is allocated on the stack, it will be deallocated
	// as soon as this function returns.
	return new string(buffer);
}

You can use stackalloc even in versions of Unity where Span<T> is not available, but you’ll need to use raw pointers inside an unsafe scope. Be careful, though, because pointers introduce many new ways to break your app. For instance, while a span bounds-checks every index against its length, a raw pointer will not protect you. In general, reserve this type of optimization for only the most extreme cases.

unsafe
{
	char* bufferPtr = stackalloc char[input.Length];

	for (int i = 0; i < input.Length; ++i)
	{
		// No bounds checking here, so be careful!
		bufferPtr[i] = (char)(input[i] + 1);
	}

	// Copy exactly input.Length characters; the (char*) constructor alone would
	// keep reading until it found a null terminator
	return new string(bufferPtr, 0, input.Length);
}

Fixed size buffers

The last allocation strategy we’ll cover is fixed size buffers, which are C-style arrays of unmanaged types declared as fields of a struct. Fixed buffers are allocated as part of the struct itself, whereas a managed array field would be a reference to an array allocated on the heap. As the fixed keyword implies, the size of these buffers must be a compile-time constant. Note the slightly different syntax, with the [] placed after the field name rather than after the type.

// This struct is 64 bytes because data is stored directly in the struct
public unsafe struct FixedBuffer
{
	public fixed byte data[64];
}

// This struct is 8 bytes (depending on platform) because only a reference to the data is stored in the struct
public struct ManagedBuffer
{
	public byte[] data;
}

Because the data is stored inline as part of the struct, the semantics are a bit different from the options we’ve seen so far. Copying a struct instance that contains a fixed size buffer copies the full contents of the buffer rather than producing a second reference to the same data.
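
For example (fixed size buffers can only be accessed inside an unsafe context):

FixedBuffer a = default;

unsafe
{
	a.data[0] = 42;

	// Assignment copies the entire 64-byte buffer, so b gets its own copy of the data
	FixedBuffer b = a;
	b.data[0] = 7;

	// a.data[0] is still 42; modifying b did not affect a
}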

Key takeaways for avoiding garbage collector spikes in Unity

While memory management in C# is automatic, there are many ways to cause allocation-related slowdowns that will impact the performance of your Unity game. In this post, we’ve covered several tips that you can use today to reduce garbage collector spikes. When working with managed memory, you can avoid unnecessary allocations, leverage object pooling, and manually time collections. We also touched on a few advanced ways to better allocate unmanaged memory, such as native collections, spans and the stackalloc keyword, and fixed size buffers.

These tips will help you achieve better performance when building your Unity games. Another challenge is maintaining good user experiences at scale once you go from the profiler into production. Embrace allows Unity developers to create amazing mobile experiences with 100% unsampled data across every user journey. Get started for free today.