Type Sizes

Integers, uints, floats, and complex numbers all have type sizes.

Signed Integers (No Decimal)

int  int8  int16  int32  int64

Unsigned Integers (Whole Numbers/No Decimal)

"uint" stands for "unsigned integer".

uint uint8 uint16 uint32 uint64 uintptr

Signed Decimal Numbers

float32 float64

Complex Numbers (a Complex Number Has a Real and Imaginary Part)

complex64 complex128

What's the Deal With the Sizes?

The size (8, 16, 32, 64, 128, etc.) represents how many bits in memory will be used to store the variable. The "default" int and uint types are either 32 or 64 bits, depending on the user's platform.

The "standard" sizes that should be used unless you have a specific performance need (e.g. using less memory) are:

  • int

  • uint

  • float64

  • complex128

Note: The "default" int/uint size depends on the platform (32- or 64-bit). Use the defaults unless you have a performance or memory reason to choose a specific sized type.

Converting Between Types

Some types can be converted explicitly:
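A minimal sketch of explicit conversion, using the type name as a conversion function:

```go
package main

import "fmt"

func main() {
	f := 88.26
	i := int(f) // explicit conversion from float64 to int
	fmt.Println(i) // 88

	n := 42
	g := float64(n) // and back the other way
	fmt.Println(g) // 42
}
```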

Converting a float to an integer in this way truncates the floating-point portion (it does not round).

Which Type Should I Use?

With so many types for what is essentially just a number, developers coming from languages with a single number type (like JavaScript) may find the choices daunting.

Prefer “Default” Types

A problem arises when we have a uint16, and the function we are trying to pass it into takes an int. We're forced to write code riddled with type conversions like:
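As a sketch, suppose a hypothetical `sum` function that only accepts int parameters; every call site with uint16 values needs an explicit conversion:

```go
package main

import "fmt"

// sum is a hypothetical function that only accepts int
func sum(a, b int) int {
	return a + b
}

func main() {
	var width uint16 = 100
	var height uint16 = 50

	// Every call site needs an explicit conversion
	total := sum(int(width), int(height))
	fmt.Println(total) // 150

	// And converting the result back is just as noisy
	asUint16 := uint16(total)
	fmt.Println(asUint16) // 150
}
```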

This style of code is tedious to write and annoying to read. When Go developers stray from the “default” type for any given type family, the code can get messy quickly. Unless you have a good performance-related reason, you'll typically just want to use the "default" types:

  • bool

  • string

  • int

  • uint

  • byte

  • rune

  • float64

  • complex128

When Should I Use a More Specific Type?

When you're super concerned about performance and memory usage.

That’s about it. The only reason to deviate from the defaults is to squeeze out every last bit of performance when writing a resource-constrained application (or, in the special case of uint64, when you need an absurd range of unsigned integers).

You can read more on this subject here if you'd like, but it's not required.