Memory Management
Understanding Go's memory management is crucial for writing high-performance applications. This lesson covers stack vs heap allocation, garbage collection, memory profiling with pprof, and optimization techniques to minimize allocations and reduce GC pressure.
Figure: Go Memory Management - Stack, Heap, and Garbage Collection
Stack vs Heap Allocation
Go automatically decides where to allocate memory based on escape analysis:
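A minimal sketch of both cases (the function names are illustrative); compiling it with `go build -gcflags="-m"` shows the compiler's allocation decisions:

```go
package main

import "fmt"

// stackAlloc's local variable never leaves the function,
// so it can live on the goroutine's stack.
func stackAlloc() int {
	x := 42
	return x // copied to the caller; x stays on the stack
}

// heapAlloc returns a pointer to its local variable,
// so escape analysis moves x to the heap.
func heapAlloc() *int {
	x := 42
	return &x // x outlives this call
}

func main() {
	fmt.Println(stackAlloc(), *heapAlloc())
}
```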
| Aspect | Stack | Heap |
|---|---|---|
| Speed | ✅ Very fast | ⚠️ Slower |
| Management | ✅ Automatic (function scope) | ⚠️ Garbage collected |
| Size | ⚠️ Limited (per goroutine) | ✅ Large, flexible |
| Lifetime | ⚠️ Function scope only | ✅ Until GC collects |
Garbage Collection
Go uses a concurrent, tri-color mark-and-sweep garbage collector:
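A minimal sketch that makes the collector's work visible through runtime.ReadMemStats (the allocation counts are arbitrary):

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Build up some reachable heap memory.
	garbage := make([][]byte, 0, 10000)
	for i := 0; i < 10000; i++ {
		garbage = append(garbage, make([]byte, 1024))
	}

	var before runtime.MemStats
	runtime.ReadMemStats(&before)

	// Drop the references so the memory becomes unreachable, then force a
	// collection (normally the runtime schedules collections itself).
	garbage = nil
	runtime.GC()

	var after runtime.MemStats
	runtime.ReadMemStats(&after)

	fmt.Printf("heap before GC: %d KB\n", before.HeapAlloc/1024)
	fmt.Printf("heap after GC:  %d KB\n", after.HeapAlloc/1024)
	fmt.Printf("completed GC cycles: %d\n", after.NumGC)
}
```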
- Mark - Find all reachable objects
- Sweep - Free unreachable memory
- Concurrent - Runs alongside your program
Memory Profiling with pprof
Use pprof to identify memory hotspots and optimize allocations:
```bash
# Create profile
go test -memprofile=mem.prof

# Analyze profile
go tool pprof mem.prof

# Interactive commands inside pprof:
(pprof) top       # Show top memory consumers
(pprof) list main # Show line-by-line allocation
(pprof) web       # Visualize (requires graphviz)
```
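A heap profile can also be written from a running program rather than a test; a minimal sketch using runtime/pprof (the output file name mirrors the commands above, and the allocations are only there to give the profile data):

```go
package main

import (
	"os"
	"runtime"
	"runtime/pprof"
)

func main() {
	// Allocate something so the heap profile has data.
	data := make([][]byte, 0, 1000)
	for i := 0; i < 1000; i++ {
		data = append(data, make([]byte, 4096))
	}
	_ = data

	f, err := os.Create("mem.prof")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	runtime.GC() // get up-to-date allocation statistics
	if err := pprof.WriteHeapProfile(f); err != nil {
		panic(err)
	}
}
```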
Object Reuse with sync.Pool
Reduce allocations by reusing objects with sync.Pool:
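A minimal sketch of the pattern using a pool of bytes.Buffer values (the render function is illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable *bytes.Buffer values.
var bufPool = sync.Pool{
	New: func() interface{} {
		return new(bytes.Buffer)
	},
}

func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset() // reset state before returning to the pool
		bufPool.Put(buf)
	}()

	buf.WriteString("hello, ")
	buf.WriteString(name)
	return buf.String()
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println(render(fmt.Sprintf("user%d", i)))
	}
}
```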
- Use for frequently allocated, short-lived objects
- Reset object state before Put()
- Don't rely on objects staying in pool
- Pool is safe for concurrent use
- Objects may be garbage collected at any time
Understanding Escape Analysis
The compiler analyzes whether variables can stay on the stack or must escape to the heap:
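A minimal sketch that triggers several of the escape causes listed below; building it with `go build -gcflags="-m"` prints the compiler's escape decisions (the variable names are illustrative):

```go
package main

import (
	"fmt"
	"os"
)

// keep holds a closure beyond the lifetime of the function that created it.
var keep func() int

func escapes() {
	// Stored in an interface{}: fmt.Println's parameters are interfaces,
	// so the compiler typically reports that x escapes to the heap.
	x := 1
	fmt.Println(x)

	// Captured by a closure that outlives this call: y moves to the heap.
	y := 2
	keep = func() int { y++; return y }

	// Size unknown at compile time: the backing array is heap-allocated.
	n := len(os.Args)
	s := make([]int, n)
	fmt.Println(len(s))
}

func main() {
	escapes()
	fmt.Println(keep())
}
```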
- Pointer is returned from function
- Stored in interface{}
- Captured by closure
- Too large for stack
- Size unknown at compile time
Memory Optimization Techniques
Practical techniques to reduce memory allocations:
- ✓ Preallocate slices: make([]T, 0, capacity)
- ✓ Use strings.Builder for string concatenation
- ✓ Return values instead of pointers when possible
- ✓ Reuse objects with sync.Pool
- ✓ Avoid unnecessary interface{} conversions
- ✓ Profile with pprof to find hotspots
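One way to check that an optimization actually removed allocations is testing.AllocsPerRun; a minimal sketch comparing the two slice-building approaches (the sizes and run counts are arbitrary):

```go
package main

import (
	"fmt"
	"testing"
)

func main() {
	const n = 10000

	grown := testing.AllocsPerRun(10, func() {
		var items []int
		for i := 0; i < n; i++ {
			items = append(items, i) // reallocates as the slice grows
		}
		_ = items
	})

	prealloc := testing.AllocsPerRun(10, func() {
		items := make([]int, 0, n) // single allocation up front
		for i := 0; i < n; i++ {
			items = append(items, i)
		}
		_ = items
	})

	fmt.Printf("append without prealloc: %.0f allocs/run\n", grown)
	fmt.Printf("append with prealloc:    %.0f allocs/run\n", prealloc)
}
```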
Preventing Memory Leaks
Common memory leak patterns and how to fix them:
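A minimal sketch of the first pattern and its fix, using context cancellation so a worker goroutine always has a way to exit (the worker function is illustrative):

```go
package main

import (
	"context"
	"fmt"
	"runtime"
	"time"
)

// worker exits when its context is cancelled instead of blocking forever.
func worker(ctx context.Context, jobs <-chan int) {
	for {
		select {
		case <-ctx.Done():
			return // no leak: the goroutine has a way out
		case j := <-jobs:
			_ = j // process the job
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	jobs := make(chan int)

	go worker(ctx, jobs)
	fmt.Println("goroutines while running:", runtime.NumGoroutine())

	cancel() // signal the worker to stop
	time.Sleep(100 * time.Millisecond)
	fmt.Println("goroutines after cancel: ", runtime.NumGoroutine())
}
```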
- Goroutine leaks (no cancellation mechanism)
- Unbounded caches (no eviction policy)
- Forgotten timers (not stopped)
- Global variables holding references
- Unclosed resources (files, connections)
Common Mistakes
1. Not preallocating slices
```go
// ❌ Wrong - grows dynamically
var items []Item
for i := 0; i < 10000; i++ {
	items = append(items, Item{}) // Multiple reallocations
}

// ✅ Correct - preallocate
items := make([]Item, 0, 10000)
for i := 0; i < 10000; i++ {
	items = append(items, Item{}) // No reallocation
}
```
2. String concatenation in loops
```go
// ❌ Wrong - creates many strings
result := ""
for i := 0; i < 1000; i++ {
	result += "x" // Each += allocates
}

// ✅ Correct - use strings.Builder
var builder strings.Builder
builder.Grow(1000) // Preallocate
for i := 0; i < 1000; i++ {
	builder.WriteString("x")
}
result := builder.String()
```
3. Ignoring escape analysis
```go
// ❌ Wrong - unnecessary heap allocation
func getPointer() *int {
	x := 42
	return &x // x escapes to heap
}

// ✅ Correct - return by value
func getValue() int {
	x := 42
	return x // Stays on stack
}
```
Exercise: Memory-Efficient Cache
Task: Build a memory-efficient LRU cache.
Requirements:
- Fixed maximum size (100 items)
- Evict least recently used items
- Use sync.Pool for temporary objects
- Minimize allocations
- Thread-safe
Solution
```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

type entry struct {
	key   string
	value interface{}
}

// entryPool reuses entry objects instead of allocating a new one per Put.
var entryPool = sync.Pool{
	New: func() interface{} {
		return &entry{}
	},
}

type LRUCache struct {
	mu       sync.Mutex
	capacity int
	cache    map[string]*list.Element
	lru      *list.List
}

func NewLRUCache(capacity int) *LRUCache {
	return &LRUCache{
		capacity: capacity,
		cache:    make(map[string]*list.Element),
		lru:      list.New(),
	}
}

func (c *LRUCache) Get(key string) (interface{}, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()

	if elem, ok := c.cache[key]; ok {
		c.lru.MoveToFront(elem)
		return elem.Value.(*entry).value, true
	}
	return nil, false
}

func (c *LRUCache) Put(key string, value interface{}) {
	c.mu.Lock()
	defer c.mu.Unlock()

	if elem, ok := c.cache[key]; ok {
		c.lru.MoveToFront(elem)
		elem.Value.(*entry).value = value
		return
	}

	// Get an entry from the pool instead of allocating a new one
	e := entryPool.Get().(*entry)
	e.key = key
	e.value = value

	elem := c.lru.PushFront(e)
	c.cache[key] = elem

	// Evict the least recently used item if over capacity
	if c.lru.Len() > c.capacity {
		oldest := c.lru.Back()
		if oldest != nil {
			c.lru.Remove(oldest)
			oldEntry := oldest.Value.(*entry)
			delete(c.cache, oldEntry.key)

			// Reset state before returning to the pool so the cache
			// does not keep the evicted value alive
			oldEntry.key, oldEntry.value = "", nil
			entryPool.Put(oldEntry)
		}
	}
}

func main() {
	cache := NewLRUCache(100)

	// Add 150 items so the oldest 50 are evicted
	for i := 0; i < 150; i++ {
		cache.Put(fmt.Sprintf("key%d", i), i)
	}

	// key0 was evicted; key100 is still cached
	if val, ok := cache.Get("key0"); ok {
		fmt.Printf("Found: %v\n", val)
	} else {
		fmt.Println("key0 evicted (LRU)")
	}
	if val, ok := cache.Get("key100"); ok {
		fmt.Printf("Found: %v\n", val)
	}

	fmt.Println("Cache working with LRU eviction!")
}
```
Summary
- Stack is fast, automatic, function-scoped
- Heap is flexible, garbage collected, slower
- Escape analysis determines stack vs heap
- GC uses concurrent mark-and-sweep
- pprof identifies memory hotspots
- sync.Pool reuses objects to reduce allocations
- Preallocate slices when size is known
- strings.Builder for efficient concatenation
- Return values instead of pointers when possible
- Prevent leaks with proper resource management
What's Next?
You've mastered Go's memory management! Next, you'll learn about Escape Analysis in depth, understanding exactly when and why variables escape to the heap, and how to optimize your code accordingly.