There will be SQL

Jeremy Morony

I recently worked on my first Go project. As a web developer, the applications I work with are often database driven. If you are like me, you might be curious about what working with a database is like in Go. And if you're used to working with a web framework like Rails, you might be wondering about an ORM. As the title of this article implies, there aren't a lot of options. In this article we'll learn to relax and go back to working without an ORM.

Let's talk about the lack of ORMs first. There are a lot of data mappers, but nothing approaching an ORM. I asked the people I knew doing Go development about an ORM and I invariably got an answer like "Why would you want something like that?". Good question! Why did I want one?

It wasn't fear of SQL: I'm comfortable with SQL, transactions, isolation levels, etc. Mostly, I was working on a project with aggressive deadlines and I was hoping to reduce the number of things I had to worry about. I wanted an ORM because I didn't want to have to think too hard about how I was going to interact with the database. Things like:

  • Many ORMs provide schema migration support.
  • SQL strings embedded in non-SQL based code are hard to manage. Hello difficult refactors when tables and columns change names.
  • An ORM provides some structure around managing related entities. Additionally, features such as lazy loading are useful.

But interestingly, there are aspects of the Go language that make some of the features of an ORM difficult to build:

  • The language and community are young-ish. An ORM is complicated and takes time to develop.
  • There are no generics; building relationship support would require some form of code generation.
  • There is no support for goroutine-local storage (something like Java's ThreadLocal). This explicitly pushes transaction management out of the domain of the ORM and into the hands of the developer.
  • Go isn't traditionally object oriented. Go has structs which can have methods associated with them, but it doesn't have constructors, inheritance or other OO features.

So, I accepted that there would be SQL. What follows is a quick survey of the APIs and tools available for working with a database in Go.

Migration tools

If you've worked with something like active record then you're familiar with a schema migration tool. Schema migration tools manage incremental, reversible changes to a database schema. They are useful if you're developing a schema incrementally, especially when there are multiple developers.

There are quite a few schema migration tools written in Go. The project I worked on used goose. It has some rough edges, but development is active.
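With goose, a migration is a plain SQL file whose up and down halves are marked with comment annotations. A minimal sketch, using a songs table like the one that appears later in this article (the table and columns are illustrative):

```sql
-- +goose Up
CREATE TABLE songs (
    id     serial PRIMARY KEY,
    title  text NOT NULL,
    artist text NOT NULL
);

-- +goose Down
DROP TABLE songs;
```

Running goose up applies pending migrations in order; goose down reverts the most recent one.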

Data Mappers

A data mapper is the M in ORM. They are responsible for mapping the columns in an SQL result set to the fields of a struct/object. There's also no shortage of these:

sqlx is the library that was most recommended. It is a set of general purpose extensions to the database/sql package and provides basic marshaling of rows into structs.

However, the project I worked on used Gorp. It is ok!

Zero Values vs Nulls: sql.NullXXX and sql.Scanner

One interesting thing about using Go with SQL backed data stores is that there is an inherent mismatch between the two languages. Go doesn't support a null value for things like ints, bools, strings, or even structs. When one of those is allocated, it assumes its "zero value". The zero value for an int is zero (surprise!). For a string it is the empty string. For a struct it is a new instance of the struct with each of its members assuming its zero value.

This is arguably one of the nicer features of Go. Null checks in the middle of code can be tiresome. This feature isn't so nice when mapping values to and from a database which does support a first-class NULL value. How can you know if an empty string should map literally to an empty string or an SQL NULL? You can't. There are several solutions to this problem:

  • Do not use NULL in your data model. All columns are defined NOT NULL with a default value. While this might initially sound appealing, there are often times when you need to distinguish a value that was never provided from its default zero value, and this strategy cannot make that distinction.

  • Instead of using int values (or strings, or bools), use a pointer. Pointers can have a nil value, and the driver will assign nil when a NULL value appears in the result set.

  • Go provides specific types in the database/sql package for working with null values. sql.NullString (et al). The sql.NullXXX types make explicit that a NULL value is expected. However, they are a bit cumbersome to work with. Each of the sql.NullXXX types provides a field called Valid that indicates if a value was present, and a field named after the type (Int64 for example) that holds the value. These types also implement the sql.Scanner interface.


type Scanner interface {
  // Scan assigns a value from a database driver.
  //
  // The src value will be of one of the following restricted
  // set of types:
  //
  // int64
  // float64
  // bool
  // []byte
  // string
  // time.Time
  // nil - for NULL values
  //
  // An error should be returned if the value can not be stored
  // without loss of information.
  Scan(src interface{}) error
}

One last note! If no precautions are taken and there is a NULL value in your result set, you'll see an error like this:

sql: Scan error on column index 7: unsupported driver -> Scan pair: <nil> -> *string

This says that a nil value cannot be stored in a string type. If you encounter it, it's time to start thinking about how you'd like to handle NULL data in your application.

Interacting with the database

Before you can start interacting with the database, you'll need to import the driver for your database of choice. Go has support for many databases, including Postgres, MySQL, and SQLite. The Go wiki maintains a complete list of drivers.

The database/sql package provides the main interface to the database. sql.DB is the main handle to the database. It is not a wrapper around an open connection to the database, but instead represents a pool of connections. If you want to test whether the database is reachable, a Ping method is provided. Connections to the database are opened (or retrieved from the pool) only when a query is about to be executed. The main methods for executing queries via sql.DB are Exec, Query, and QueryRow. sql.DB also provides a way to start transactions via its Begin method. The sql.Tx type provides the same three query methods (Exec, Query, QueryRow) as sql.DB.

Interestingly, Go does not provide an interface in the database/sql package that collects the common methods of sql.DB and sql.Tx. For simple programs this might not matter, but it can be inconvenient for more sophisticated programs that want to reuse code regardless of whether it is explicitly participating in a transaction. Without introducing an interface, any code that queries the database would need to provide two implementations -- one that accepts a *sql.DB and another that accepts a *sql.Tx.

But, because Go interfaces are satisfied implicitly -- a type implements an interface simply by having the right methods -- you can define your own interface and use it:


type SQLRunner interface {
  Exec(query string, args ...interface{}) (sql.Result, error)
  Query(query string, args ...interface{}) (*sql.Rows, error)
  QueryRow(query string, args ...interface{}) *sql.Row
}

An annotated example


package main

import (
  // database/sql is the main package for interacting with the db
  "database/sql"
  "log"

  // Import the database driver. The underscore in the import statement tells
  // the compiler not to complain that the pq package is never referenced by
  // the program. The driver package is only imported for its side effects.
  _ "github.com/lib/pq"
)

type Song struct {
  Id     int64
  Title  string
  Artist string
}

func main() {
  // The returned db object is of type sql.DB.
  //
  // It is not a wrapper around an open connection to the database. Instead,
  // it is a facility for interacting with the database. Connections to the
  // database are opened (or retrieved from a pool) only when a query is
  // going to be executed.
  db, err := sql.Open("postgres", "postgres://username:password@host/database")
  if err != nil {
    log.Fatalln(err)
  }

  // Close the handle to the database.
  // Note: Close can return an error even though the code below ignores it.
  defer db.Close()

  // Query for songs based on artist name.
  //
  // This query uses placeholder parameters to parameterize the SQL.
  // Supplying input through placeholders sanitizes the input, which
  // protects against SQL injection attacks. Under the covers the driver
  // creates a prepared statement, executes the query using the prepared
  // statement, then finally deletes the prepared statement. db.Query,
  // db.QueryRow, and db.Exec all provide this functionality.
  rows, err := db.Query("SELECT id, title, artist FROM songs WHERE artist = $1", "Wire")
  if err != nil {
    log.Fatalln(err)
  }

  // Close the result set. This will ensure the connection to the database is
  // released. Again, Close can return an error but the code below ignores it.
  defer rows.Close()

  songs := make([]*Song, 0)

  // Iterate over the rows in the result set. Successfully iterating over the
  // whole result set will automatically close the connection to the database.
  for rows.Next() {
    s := &Song{}

    // Scan column values into struct fields. This operation will fail if
    // any of the columns cannot be properly converted to the type of the
    // specified struct field.
    err := rows.Scan(&s.Id, &s.Title, &s.Artist)
    if err != nil {
      log.Fatalln(err)
    }
    songs = append(songs, s)
  }

  // There can be errors while iterating over the result set. If an error is
  // encountered, Next will return false on subsequent calls. After iterating
  // over the result set, Err should be called to check if the complete result
  // set has been read.
  if err = rows.Err(); err != nil {
    log.Fatalln(err)
  }
}

Conclusion

If you're used to working with an ORM, you might find the facilities for working with the database provided by Go to be a bit low level. While you might have to (re)familiarize yourself with SQL, the database/sql package is relatively easy to work with. And there is an active community providing a set of tools, including schema migration managers and data mappers, that make working with the database even easier.