> SQL Schema migration tool for [Go](http://golang.org/). Based on [gorp](https://github.com/go-gorp/gorp) and [goose](https://bitbucket.org/liamstask/goose).
The CLI provides commands such as `up`, `down`, `redo` and `status`; for example:

```
up        Migrates the database to the most recent version available
```
Each command requires a configuration file (which defaults to `dbconfig.yml`, but can be specified with the `-config` flag). This config file should specify one or more environments:
```yml
development:
    dialect: sqlite3
    datasource: test.db
    dir: migrations/sqlite3

production:
    dialect: postgres
    datasource: dbname=myapp sslmode=disable
    dir: migrations/postgres
    table: migrations
```
The `table` setting is optional and will default to `gorp_migrations`.
The environment that will be used can be specified with the `-env` flag (defaults to `development`).
Use the `--help` flag in combination with any command to get an overview of its usage:
```
$ sql-migrate up --help
Usage: sql-migrate up [options] ...

  Migrates the database to the most recent version available.

Options:

  -config=dbconfig.yml   Configuration file to use.
  -env="development"     Environment.
  -limit=0               Limit the number of migrations (0 = unlimited).
```
The `up` command applies all available migrations. By contrast, `down` will only apply one migration by default. This behavior can be changed for both by using the `-limit` parameter.
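For example, `sql-migrate down -limit=2` unapplies the two most recent migrations instead of just one.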
The `redo` command will unapply the last migration and reapply it. This is useful during development, when you're writing migrations.
Use the `status` command to see the state of the applied migrations.
If you have complex statements which contain semicolons, use `StatementBegin` and `StatementEnd` to indicate boundaries:
```sql
-- +migrate Up
CREATE TABLE people (id int);

-- +migrate StatementBegin
CREATE OR REPLACE FUNCTION do_something()
returns void AS $$
DECLARE
  create_query text;
BEGIN
  -- Do something here
END;
$$
language plpgsql;
-- +migrate StatementEnd

-- +migrate Down
DROP FUNCTION do_something();
DROP TABLE people;
```
The order in which migrations are applied is defined through the filename: sql-migrate will sort migrations based on their name. It's recommended to use an increasing version number or a timestamp as the first part of the filename.
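For example, a migration named `1_initial.sql` will be applied before `2_record.sql` (illustrative filenames).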
Normally each migration is run within a transaction in order to guarantee that it is fully atomic. However, some SQL commands (for example creating an index concurrently in PostgreSQL) cannot be executed inside a transaction. To execute such a command in a migration, the migration can be run using the `notransaction` option:
```sql
-- +migrate Up notransaction
CREATE UNIQUE INDEX CONCURRENTLY people_unique_id_idx ON people (id);
```
## Embedding migrations with [packr](https://github.com/gobuffalo/packr)
If you like your Go applications self-contained (that is, a single binary), use [packr](https://github.com/gobuffalo/packr) to embed the migration files.
Just write your migration files as usual, as a set of SQL files in a folder.
Use the `PackrMigrationSource` in your application to find the migrations:
```go
migrations := &migrate.PackrMigrationSource{
    Box: packr.NewBox("./migrations"),
}
```
If you already have a box and would like to use a subdirectory:
```go
migrations := &migrate.PackrMigrationSource{
    Box: myBox,
    Dir: "./migrations",
}
```
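Once the migrations are embedded, they can be applied through the library API. Below is a minimal sketch, assuming a PostgreSQL database reached through the `lib/pq` driver and the connection string from the example config above; `migrate.Exec` applies the pending migrations and returns how many were run:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	"github.com/gobuffalo/packr"
	_ "github.com/lib/pq" // assumed driver; use whichever driver matches your dialect
	migrate "github.com/rubenv/sql-migrate"
)

func main() {
	// Locate the migrations embedded with packr.
	migrations := &migrate.PackrMigrationSource{
		Box: packr.NewBox("./migrations"),
	}

	// Connection string and dialect are placeholders taken from the example config.
	db, err := sql.Open("postgres", "dbname=myapp sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Apply all pending migrations; n is the number that were applied.
	n, err := migrate.Exec(db, "postgres", migrations, migrate.Up)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Applied %d migrations!\n", n)
}
```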
## Embedding migrations with [bindata](https://github.com/shuLhan/go-bindata)
As an alternative (though slightly less maintained), you can use [bindata](https://github.com/shuLhan/go-bindata) to embed the migration files.
Running `go-bindata` over your migrations folder produces a `bindata.go` file containing your migrations. Remember to regenerate `bindata.go` whenever you add or modify a migration (`go generate` can help here).
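One way to wire this into `go generate` is a directive in a Go source file of the package that should hold the embedded data; the package name and paths below are illustrative assumptions, while `-pkg` and `-o` are standard go-bindata flags:

```go
// Regenerate bindata.go from the SQL files (package name and paths are examples).
//go:generate go-bindata -pkg myapp -o bindata.go db/migrations/
```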
Use the `AssetMigrationSource` in your application to find the migrations:
```go
migrations := &migrate.AssetMigrationSource{
    Asset:    Asset,
    AssetDir: AssetDir,
    Dir:      "db/migrations",
}
```
Both `Asset` and `AssetDir` are functions provided by bindata.
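An `AssetMigrationSource` built this way can then be applied just like the packr source above, for example by passing it to `migrate.Exec`.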