Go interfaces, but at what cost?
Philip Pearl
Posted on July 6, 2019
There's a cost associated with using interfaces. What is that cost? Let's try to work out some of it.
Let's start with the basic overhead of calling a method via an interface. We'll define a very simple interface with a single method and a very simple implementation. We'll also mark the method so it isn't inlined by the compiler. We do this so that the call to get isn't completely removed in the direct case.
type getter interface {
	get() int
}

type zero struct{}

//go:noinline
func (z zero) get() int {
	return 0
}
We make a very simple benchmark, with two subtests. One calls get via a getter, the other calls get on the concrete zero type directly.
func BenchmarkInterfaceCallSimple(b *testing.B) {
	var z zero
	var g getter
	g = z
	b.Run("via interface", func(b *testing.B) {
		total := 0
		for i := 0; i < b.N; i++ {
			total += g.get()
		}
		if total > 0 {
			b.Logf("total is %d", total)
		}
	})
	b.Run("direct", func(b *testing.B) {
		total := 0
		for i := 0; i < b.N; i++ {
			total += z.get()
		}
		if total > 0 {
			b.Logf("total is %d", total)
		}
	})
}
Here's the result.
BenchmarkInterfaceCallSimple/via_interface-8 4.63 ns/op
BenchmarkInterfaceCallSimple/direct-8 2.44 ns/op
So there's a small overhead, a couple of nanoseconds per call here, from making a method call via an interface. It's so small it won't matter except in extreme cases. Are there any other issues?
Let's try something a little different. We'll create a very simple implementation of io.Reader that fills the buffer with zeros.
type zeroReader struct{}

func (z zeroReader) Read(p []byte) (n int, err error) {
	for i := range p {
		p[i] = 0
	}
	return len(p), nil
}
Our benchmark will follow a very similar structure to the previous one. We'll test calling our implementation via an io.Reader interface and directly.
func BenchmarkInterfaceAlloc(b *testing.B) {
	var z zeroReader
	var r io.Reader
	r = z
	b.Run("via interface", func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			var buf [7]byte
			r.Read(buf[:])
		}
	})
	b.Run("direct", func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			var buf [7]byte
			z.Read(buf[:])
		}
	})
}
Here are the results. Instead of the approximately 2ns overhead we now have closer to 20ns, plus an allocation on every call. What's going on here?
BenchmarkInterfaceAlloc/via_interface-8 50000000 24.5 ns/op 8 B/op 1 allocs/op
BenchmarkInterfaceAlloc/direct-8 300000000 5.52 ns/op 0 B/op 0 allocs/op
In both cases we allocate a 7-byte buffer before each Read call. In the direct case the compiler knows exactly which implementation of Read is being called, so escape analysis can prove that the buffer does not escape, and the buffer can be allocated on the stack. When calling via an interface the implementation is unknown at compile time, so the compiler must assume the buffer escapes and allocate it on the heap instead. Heap allocation takes longer (and may have to take a lock), and it creates more work for the garbage collector.
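One way to confirm that the extra cost comes from the allocation rather than the interface call itself is to hoist the buffer out of the loop. The variant below isn't part of the original benchmark, it's just a sketch: the buffer still escapes to the heap because of the interface call, but it is now allocated only once, so the reported allocations per operation should drop back to zero.

func BenchmarkInterfaceAllocHoisted(b *testing.B) {
	var z zeroReader
	var r io.Reader = z
	b.ReportAllocs()
	// buf still escapes, because the compiler can't see what r.Read does
	// with it, but it is moved to the heap once rather than once per call.
	var buf [7]byte
	for i := 0; i < b.N; i++ {
		r.Read(buf[:])
	}
}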
I find it quite disappointing that any memory passed via an interface will always escape and will always require heap memory. In the case of io.Reader, the English definition of the interface strongly hints that the buffer should not escape:
Implementations must not retain p.
Similarly, the json.Unmarshaler interface description implies the buffer should not actually escape:
UnmarshalJSON must copy the JSON data if it wishes to retain the data after returning.
Wouldn't it be nice if we could express this on the interface definition in a way the compiler could understand?
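Until something like that exists, the usual workaround is to pay for the heap allocation once and reuse the memory. Here's a minimal sketch, not from the original post, that reuses read buffers via a sync.Pool; the helper readViaInterface and the 7-byte buffer size are just for illustration, and it assumes the io and sync packages are imported.

// bufPool hands out reusable 7-byte read buffers. It stores *[]byte rather
// than []byte so that Put doesn't allocate when boxing the value into an
// interface{}.
var bufPool = sync.Pool{
	New: func() interface{} {
		b := make([]byte, 7)
		return &b
	},
}

// readViaInterface reads through the io.Reader interface but reuses a pooled
// buffer instead of allocating a fresh one on every call.
func readViaInterface(r io.Reader) (int, error) {
	bp := bufPool.Get().(*[]byte)
	defer bufPool.Put(bp)
	return r.Read(*bp)
}

This doesn't make the buffer stack-allocated; it just amortises the heap allocation across many calls.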