Optimizing database performance in microservices with Go, GORM, and PostgreSQL
Following up on a previous blog article, join us in this new post as we dive into how to boost database performance for microservices using the features of GORM and PostgreSQL. Discover how these tools can take your microservices to the next level, making them more efficient and effective than ever.
To make it easier to follow this guide, please review the code that was developed here
We’ll continue working with the same code that we made for the previous blog post. To make our optimizations meaningful, we need to increase the complexity of the code and work with a larger volume of data. This will allow us to perform more realistic database queries, bringing our example closer to the conditions of a large-scale project. Some of these modifications may not seem to make sense, but keep in mind that all the operations and queries are just examples.
The scope of the blog is limited to showing only the most important optimizations that can be performed on a database, so it’s possible that some things are intentionally (or not) left out.
Let’s start with the first section. Before doing any optimization in the database, we need to make sure we have a clear understanding of all the entities and of the data we are going to work with.
Efficient Database Modeling
The first step to building an efficient database is to thoroughly understand the type of database we are designing and how each entity relates to the others. By having a clear grasp of these relationships, we can make more informed decisions on how to optimize queries, ensuring that data retrieval is both efficient and effective. The first topic is normalization.
Normalization
In Go, we can manage the normalization of the database through the way we define our models. Let’s check the database we’ve been working with. First, as we previously mentioned, we need to know which entities exist and how they relate to each other.
Entities:
Planetary systems: Contains data on the different planetary systems.
Planets: Information about planets, each related to a planetary system.
Moons: Each moon is linked to a specific planet.
Asteroids: Information about asteroids, which are related to a planetary system.
Astronauts: Each astronaut may visit multiple celestial bodies, and they come from a planet.
The database has the following E-R diagram:

We can define the models as follows:
PlanetarySystem.go
type PlanetarySystem struct {
	gorm.Model
	Name      string     `gorm:"not null;type:varchar(100)" json:"name"`
	Planets   []Planet   `gorm:"foreignKey:PlanetarySystemID" json:"planets"`
	Asteroids []Asteroid `gorm:"foreignKey:PlanetarySystemID" json:"asteroids"`
}
Planet.go
type Planet struct {
	gorm.Model
	Name              string      `gorm:"not null;type:varchar(100)"`
	Description       string      `gorm:"not null;type:varchar(255)"`
	Moons             []Moon      `gorm:"foreignKey:PlanetID"`
	Mass              float64     `gorm:"not null;type:float"`
	Diameter          float64     `gorm:"not null;type:float"`
	Gravity           float64     `gorm:"not null;type:float"`
	OrbitPeriod       float64     `gorm:"not null;type:float"`
	RotationPeriod    float64     `gorm:"not null;type:float"`
	HasRings          bool        `gorm:"not null;type:boolean"`
	Astronauts        []Astronaut `gorm:"foreignKey:PlanetID"`
	PlanetarySystemID uint        `gorm:"not null;type:int"`
}
Asteroid.go
type Asteroid struct {
	gorm.Model
	Name              string  `gorm:"not null;type:varchar(100)"`
	Description       string  `gorm:"not null;type:varchar(255)"`
	PlanetarySystemID uint    `gorm:"not null;type:int"`
	Mass              float64 `gorm:"not null;type:float"`
	Diameter          float64 `gorm:"not null;type:float"`
}
Astronaut.go
type Astronaut struct {
	gorm.Model
	Name     string    `gorm:"not null;type:varchar(100)"`
	Age      int       `gorm:"not null;type:int"`
	PlanetID uint      `gorm:"not null;type:int"`
	Missions []Mission `gorm:"foreignKey:AstronautID"`
	Hours    int       `gorm:"not null;type:int"`
}
Mission.go
type Mission struct {
	gorm.Model
	AstronautID   uint      `gorm:"not null;type:int"`
	CelestialBody string    `gorm:"not null;type:varchar(100)"`
	CelestialID   uint      `gorm:"not null;type:int"`
	Date          time.Time `gorm:"not null;type:timestamp"`
}
Moon.go
type Moon struct {
	gorm.Model
	Name           string  `gorm:"not null;type:varchar(100)"`
	Description    string  `gorm:"not null;type:varchar(255)"`
	PlanetID       uint    `gorm:"not null;type:int"`
	Mass           float64 `gorm:"not null;type:float"`
	Diameter       float64 `gorm:"not null;type:float"`
	Gravity        float64 `gorm:"not null;type:float"`
	OrbitPeriod    float64 `gorm:"not null;type:float"`
	RotationPeriod float64 `gorm:"not null;type:float"`
	HasAtmosphere  bool    `gorm:"not null;type:boolean"`
}
How to manage the relationships:
Each planetary system can have multiple Planets and Asteroids, so the relationship between them is one-to-many (a "has many" in GORM terms). The way to create this kind of relationship in Go is using a slice and a tag that tells GORM how the models are related. Following this example, foreignKey indicates that Planets are related to PlanetarySystem through a field named PlanetarySystemID. Check Planet.go and Asteroid.go to see that field. If you want to know more about this, check here:
There are other types of relationships in the database; you can check how they are defined, and by looking at the ER diagram you will understand them.
In this way, relationships between entities are clearly defined, and redundancy is minimized.
But in some cases we need to improve our read operations, and normalized tables like these could make those operations inefficient. In situations like this, we can opt to de-normalize the tables, which will increase reading speed.
De-normalization
The de-normalization process can help us improve read-heavy operations. In some cases, you may decide to de-normalize the tables because the read queries involve a lot of joins across them.
Let’s suppose that we need to quickly access all celestial bodies (like planets, moons, and asteroids) in a specific planetary system. To do that, instead of performing multiple joins, we can de-normalize the tables to make access faster.
Celestial Body model (de-normalized)
type CelestialBody struct {
	ID                uint
	Name              string
	Type              string // 'Planet', 'Moon', or 'Asteroid'
	ParentID          *uint  // For moons, this is the planet's ID
	PlanetarySystemID uint
}
In this way we can have the information in a grouped form. By using CelestialBody, we can group Planets, Moons, and Asteroids, and the reading query is simpler.
Query example
A normalized query to fetch all the celestial bodies in a system looks like this:
r.db.Preload("Planets.Moons").Preload("Asteroids").Where("planetary_system_id = ?", id).Find(&planetarySystem)
A de-normalized query looks like this:
r.db.Where("planetary_system_id = ?", systemID).Find(&celestialBodies)
The difference is obvious, but we need to be careful with this approach: combining several tables can improve performance, but we must choose carefully which data to de-normalize to avoid unnecessary duplication.
There are other interesting topics to consider when we talk about “Efficient Database Modeling”, but we are not going to cover them here to avoid extending the blog too much.
Now we are going to look at a very common problem that can affect our database performance when we start to work with large volumes of data. This problem is well known as the N+1 query problem.
Solving the N+1 query problem
The most widely used technique to solve this in GORM is Eager Loading. It consists of querying the data of several models at once instead of performing many separate queries. Check this example:
r.db.Preload("Planets.Moons").Where("planetary_system_id = ?", id).Find(&planetarySystem)
In this query we use Preload from the GORM library to make use of Eager Loading.
Using Eager Loading, we will see two queries for this code:
SELECT * FROM "moons" WHERE "moons"."planet_id" IN (201,202,203,204,205) AND "moons"."deleted_at" IS NULL
SELECT * FROM "planets" WHERE "planets"."planetary_system_id" = 5 AND "planets"."deleted_at" IS NULL
If we use Lazy Loading, which basically consists of querying data on demand, we will have the following queries:
SELECT * FROM "planets" WHERE "planets"."planetary_system_id" = 5 AND "planets"."deleted_at" IS NULL
SELECT * FROM "moons" WHERE "moons"."planet_id" = 201 AND "moons"."deleted_at" IS NULL
SELECT * FROM "moons" WHERE "moons"."planet_id" = 202 AND "moons"."deleted_at" IS NULL
SELECT * FROM "moons" WHERE "moons"."planet_id" = 203 AND "moons"."deleted_at" IS NULL
SELECT * FROM "moons" WHERE "moons"."planet_id" = 204 AND "moons"."deleted_at" IS NULL
SELECT * FROM "moons" WHERE "moons"."planet_id" = 205 AND "moons"."deleted_at" IS NULL
The quantity of queries increases as the number of planets increases.
In some cases, it is useful to preload only data that meets certain conditions. This can be achieved by using Preload with conditions, for example:
db.Preload("Planets.Moons", "orbit_period > ?", 100).Where("planetary_system_id = ?", id).Find(&planetarySystem)
It will return only the moons that meet the condition orbit_period > 100.

Now that we know how to use Eager Loading, let’s look at another very useful approach: inserting or updating large quantities of entries in a single query.
Batch inserts/updates
When we want to insert a large number of entries into a table, instead of using one query for every entry, it is better to use a batch insert. Batch insert is a well-known technique widely used in database management, and in GORM it can be done easily by just passing a slice to the Create method instead of, for example, going through the slice and inserting one entry at a time. Let’s see a practical example:
Suppose that we want to add a lot of missions to the missions table. We could do this:
for i := 0; i < n; i++ {
	mission := factory.createMission()
	factory.db.Create(mission)
}
This way we’ll have n queries!:

It’s much better to do something like this:
var instances []models.Mission
for i := 0; i < n; i++ {
	mission := factory.createMission()
	instances = append(instances, *mission)
}
if err := factory.db.Create(&instances).Error; err != nil {
	return nil, err
}
This code will create just one query to insert multiple missions into the table:

Great, we are now inserting, editing, and reading large amounts of data efficiently, but what about data consistency? This is a very important topic; let’s review what GORM has to offer us.
Data consistency
Let’s suppose we want to update the orbits of all the Planets and their Moons for a specific PlanetarySystem due to, for example, the presence of a black hole. We can create a function that tries to update all the orbits like this:
func (r *planetarysystemRepository) UpdateAllTheOrbits(id uint, dto dtos.UpdateOrbitsDTO) error {
	var planetarySystem models.PlanetarySystem
	if err := r.db.Preload("Planets.Moons").First(&planetarySystem, id).Error; err != nil {
		return err
	}
	var planets []models.Planet
	var moons []models.Moon
	for _, planet := range planetarySystem.Planets {
		planet.OrbitPeriod += dto.OrbitPeriodOffset
		planets = append(planets, planet)
		for _, moon := range planet.Moons {
			moon.OrbitPeriod += dto.OrbitPeriodOffset
			moons = append(moons, moon)
		}
	}
	if err := r.db.Save(&planets).Error; err != nil {
		return err
	}
	if err := r.db.Save(&moons).Error; err != nil {
		return err
	}
	return nil
}
And it works fine, we’re using Preload, but… what if the moons cannot be updated? For example, suppose there is a restriction on the table that prevents the OrbitPeriod from being zero or negative, and we are trying to update the orbit to an invalid value. In that case, the update for the moons would fail, but the planets would still be updated, which causes inconsistency in the data.
To solve problems like that, we need to use transactions. A transaction allows us to execute a set of operations as a single unit: it ensures that all operations within it are executed correctly, and if any of them fails, the entire transaction is canceled, rolling the tables back to the state they were in before the transaction started.
To implement transactions in GORM, we can do it manually, as in the next example:
func (r *planetarysystemRepository) UpdateAllTheOrbits(id uint, dto dtos.UpdateOrbitsDTO) error {
	var planetarySystem models.PlanetarySystem
	if err := r.db.Preload("Planets.Moons").First(&planetarySystem, id).Error; err != nil {
		return err
	}
	tx := r.db.Begin()
	defer func() {
		if p := recover(); p != nil {
			tx.Rollback()
		}
	}()
	var planets []models.Planet
	var moons []models.Moon
	for _, planet := range planetarySystem.Planets {
		planet.OrbitPeriod += dto.OrbitPeriodOffset
		planets = append(planets, planet)
		for _, moon := range planet.Moons {
			moon.OrbitPeriod += dto.OrbitPeriodOffset
			moons = append(moons, moon)
		}
	}
	if err := tx.Save(&planets).Error; err != nil {
		tx.Rollback()
		return err
	}
	if err := tx.Save(&moons).Error; err != nil {
		tx.Rollback()
		return err
	}
	return tx.Commit().Error
}
It starts a new transaction with db.Begin(), and in case of error it uses the Rollback() function to roll the transaction back.
Now that we know how to guarantee data consistency, let’s dive into a very powerful concept that will increase the speed of our operations: Goroutines.
Goroutines
There are cases where we might need to perform an operation on multiple entries (retrieve, update, create, or delete), and, due to the nature of the data, each operation can be done independently. This means that database operations can be performed without waiting for the previous operation to complete, a process known as asynchronous operation. The way to do asynchronous operations in Go is by using Goroutines. Let’s see an example of how to implement them.
Suppose that we want to get the total mass and total minerals of a planetary system, and the functions that do that are very complex and take a lot of processing time:
func calculatePlanetMass(planet models.Planet) float64 {
	// Very long calculation
	return planet.Mass
}

func calculateAsteroidMinerals(asteroid models.Asteroid) float64 {
	// Very long calculation
	return asteroid.MineralsQuantity
}
The normal implementation looks like this:
func (r *planetarysystemRepository) GetPlanetarySystemMassAndMinerals(id uint) (dtos.MassAndMineralsDTO, error) {
	var planetarySystem models.PlanetarySystem
	if err := r.db.Preload("Planets").Preload("Asteroids").First(&planetarySystem, id).Error; err != nil {
		return dtos.MassAndMineralsDTO{}, err
	}
	var mass float64
	for _, planet := range planetarySystem.Planets {
		mass += calculatePlanetMass(planet)
	}
	var minerals float64
	for _, asteroid := range planetarySystem.Asteroids {
		minerals += calculateAsteroidMinerals(asteroid)
	}
	return dtos.MassAndMineralsDTO{Mass: mass, Minerals: minerals}, nil
}
This is the result we have if we use this approach:

It takes 1007 milliseconds to retrieve the response.
On the other hand, the same code using Goroutines looks like this:
func (r *planetarysystemRepository) GetPlanetarySystemMassAndMineralsWithGoroutines(id uint) (dtos.MassAndMineralsDTO, error) {
	var planetarySystem models.PlanetarySystem
	if err := r.db.Preload("Planets").Preload("Asteroids").First(&planetarySystem, id).Error; err != nil {
		return dtos.MassAndMineralsDTO{}, err
	}
	massChan := make(chan float64)
	mineralsChan := make(chan float64)
	go func() {
		var mass float64
		for _, planet := range planetarySystem.Planets {
			mass += calculatePlanetMass(planet)
		}
		massChan <- mass
	}()
	go func() {
		var minerals float64
		for _, asteroid := range planetarySystem.Asteroids {
			minerals += calculateAsteroidMinerals(asteroid)
		}
		mineralsChan <- minerals
	}()
	mass := <-massChan
	minerals := <-mineralsChan
	return dtos.MassAndMineralsDTO{Mass: mass, Minerals: minerals}, nil
}
And now the response speed increases; in fact, the time it takes to return a response is about half that of the previous method:

It’s pretty cool having features like that, but we have to be careful when using Goroutines because they can be a double-edged sword.
Always keep in mind that we are working with microservices, where resources are limited. Running thousands of goroutines in parallel might reduce the response time, but it is not always the best approach; the connection pool to the DB also has its limits. Always try to find a balance, and remember the key principles we discussed earlier.
Also, when multiple Goroutines access or modify shared resources (e.g., models or shared state in memory), this can lead to race conditions and inconsistent data.
Conclusion
Congrats! Now you're ready to speed up your microservices using some of the awesome features GORM has to offer. We’ve walked through key optimization techniques, shared practical examples, and included code you can come back to anytime you need.
We hope you found this post helpful. Keep exploring, keep learning, and have fun along the way!
Why choose Tuix?
At Tuix we have Go experts who can help you improve the speed of your Go-based microservices. Talk with us if you’re interested in making your next project real:
Happy coding!