
Transition Your Python App to PostgreSQL with Alembic Migrations

Overview

This guide walks you through upgrading your Python application’s database from SQLite to PostgreSQL and implementing database migrations with Alembic. You’ll learn how to set up PostgreSQL locally, connect your FastAPI application, and manage database schema changes effectively. The result is a production-ready setup in which schema updates can be applied without losing existing data.

Prerequisites

  • Basic understanding of Python and FastAPI.
  • Existing FastAPI project using SQLAlchemy (or a similar ORM).
  • Familiarity with using a terminal or command line.

Steps

  1. Remove SQLite and Delete Old Database File

    Since you’re moving to PostgreSQL, you no longer need the SQLite database file. Locate and delete your blog.db file (or similarly named file) from your project directory. If you have important data in this file, you’ll need to migrate it separately, which is beyond the scope of this tutorial.

  2. Install and Set Up PostgreSQL Locally

    PostgreSQL is a powerful and widely used relational database. You can install it directly on your machine or use Docker. For this tutorial, we’ll cover a local installation. On macOS, you can use Homebrew: brew install postgresql@18. On Linux, you might use apt. Windows users can download an installer. After installation, you’ll need to start the PostgreSQL service. For Homebrew, this is usually brew services start postgresql@18.

    Expert Note: Using Docker is a popular alternative that keeps your host operating system clean. A typical Docker command might look like: docker run --name postgres -e POSTGRES_PASSWORD=secret -p 5432:5432 -d postgres.

  3. Create a PostgreSQL Database User and Database

    You need to create a user and a database for your application. You can do this using the PostgreSQL command-line tool, psql. First, create a user with a password. For example, on macOS: psql postgres -c "CREATE USER blog_user WITH PASSWORD 'blog_pass';". Remember to use strong, unique passwords in production environments.

    Next, create the database and assign ownership to the user you just created: psql postgres -c "CREATE DATABASE blog OWNER blog_user;". This sets up the database that your application will connect to.

  4. Install the Python PostgreSQL Driver

    Your Python application needs a driver to communicate with PostgreSQL. The recommended modern driver is psycopg (often referred to as psycopg3). Install it using your package manager. If you use pip: pip install "psycopg[binary]". If you are using uv as in this series: uv add "psycopg[binary]". The [binary] extra helps ensure easier installation by providing pre-compiled binaries.

    This driver replaces older drivers like psycopg2 and works seamlessly with asynchronous operations in SQLAlchemy, which is crucial for FastAPI.
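    A quick way to confirm the install succeeded is to import the package and print its version; this is only a smoke test of the installation and does not require a running PostgreSQL server:

    ```python
    # Smoke test: confirms the psycopg driver installed correctly.
    # This only imports the package; no database connection is made.
    import psycopg

    print("psycopg version:", psycopg.__version__)
    ```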

  5. Update Application Configuration for PostgreSQL

    Your application’s database connection URL is likely hardcoded or configured for SQLite. You need to update this to use your new PostgreSQL credentials. Open your configuration file (e.g., config.py) and add a new setting for the database URL, for example a database_url: str field on your settings class.

    Since this URL contains sensitive information like username and password, it should not be hardcoded. Instead, load it from environment variables or a .env file. In your .env file, add the URL in this format: DATABASE_URL="postgresql+psycopg://blog_user:blog_pass@localhost/blog". Ensure your .env file is added to your .gitignore to prevent accidental commits.
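    Because the credentials are embedded in the URL, any special characters in the password (such as @ or /) must be percent-encoded or they will corrupt URL parsing. A small standard-library sketch of composing such a URL (the helper name is mine, not from the tutorial):

    ```python
    from urllib.parse import quote_plus


    def make_database_url(user: str, password: str, host: str, dbname: str) -> str:
        """Compose a SQLAlchemy URL for the async psycopg driver.

        quote_plus escapes characters such as '@' or '/' that would
        otherwise break URL parsing if they appear in the password.
        """
        return f"postgresql+psycopg://{user}:{quote_plus(password)}@{host}/{dbname}"


    print(make_database_url("blog_user", "blog_pass", "localhost", "blog"))
    # postgresql+psycopg://blog_user:blog_pass@localhost/blog
    ```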

    Update your database connection logic (e.g., in database.py) to use this new URL. Remove any SQLite-specific settings like check_same_thread. The code for creating the SQLAlchemy engine will now look much cleaner, using your settings: create_async_engine(settings.database_url).

  6. Remove create_all from App Lifespan

    The create_all method from SQLAlchemy is convenient for development because it creates any tables that don’t yet exist. However, it’s not a migration system: if you add a new column to a model, create_all won’t alter existing tables. Dropping and recreating the database is not a viable option for production environments.

    You need to remove await base.metadata.create_all(engine) from your application’s lifespan setup. Keep the engine disposal part for proper cleanup. This change signals that database schema management will be handled by a dedicated migration tool.
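    The trimmed lifespan can be sketched as follows. A stub engine stands in for SQLAlchemy’s real AsyncEngine so the shape is runnable without a database; in your app you would keep the real engine and pass the function to FastAPI(lifespan=...):

    ```python
    # Sketch of the trimmed lifespan: no create_all, only engine cleanup.
    import asyncio
    from contextlib import asynccontextmanager


    class StubEngine:
        """Stands in for the real SQLAlchemy AsyncEngine (sketch assumption)."""

        def __init__(self):
            self.disposed = False

        async def dispose(self):
            self.disposed = True


    engine = StubEngine()


    @asynccontextmanager
    async def lifespan(app):
        # Previously: await base.metadata.create_all(...) ran here on startup.
        # Schema creation is now Alembic's job (steps 7-10).
        yield
        await engine.dispose()  # keep the cleanup on shutdown


    async def main():
        async with lifespan(app=None):
            pass  # the application would serve requests here
        print("engine disposed:", engine.disposed)


    asyncio.run(main())  # prints: engine disposed: True
    ```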

  7. Install and Initialize Alembic

    Alembic is a database migration tool for SQLAlchemy. It allows you to track changes to your database schema over time, similar to version control for code. Install Alembic using your package manager. If using pip: pip install alembic. If using uv: uv add alembic.

    Initialize Alembic in your project by running: uv run alembic init --template async alembic. This command creates an alembic directory containing configuration files and a versions folder where your migration scripts will be stored. The --template async flag ensures it’s set up for asynchronous operations.

  8. Configure Alembic

    Alembic needs to know how to connect to your database and where to find your SQLAlchemy models. You’ll mainly configure two files:

    • alembic.ini: Set the sqlalchemy.url to your database connection string. However, it’s better to leave this empty here and set it programmatically in env.py using your application’s settings to avoid hardcoding credentials.
    • alembic/env.py: This file handles the migration environment. You need to import your application’s settings, your SQLAlchemy base, and your models. Specifically, set the sqlalchemy.url to settings.database_url and configure target_metadata to base.metadata. Importing your models is crucial so Alembic can detect your table structures.

    Expert Note: To keep your linter from flagging the seemingly unused model imports in env.py, add a # noqa comment next to the import statement; the models are imported only for their side effect of registering tables with SQLAlchemy’s metadata.
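    Putting the two configuration points together, the relevant edits to env.py look roughly like this. The module paths (app.config, app.database, app.models) are assumptions; adjust them to your project layout:

    ```python
    # alembic/env.py (excerpt) -- the parts you change after `alembic init`.
    from alembic import context

    from app.config import settings   # your application settings (assumed path)
    from app.database import base     # your declarative base (assumed path)
    from app import models  # noqa: F401  -- registers tables on base.metadata

    config = context.config

    # Inject the URL from settings instead of hardcoding it in alembic.ini.
    config.set_main_option("sqlalchemy.url", settings.database_url)

    # Autogenerate compares this metadata against the live database.
    target_metadata = base.metadata
    ```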

  9. Generate Your First Migration

    With Alembic configured, you can generate your initial migration script. This script will contain the SQL commands to create your existing tables based on your current models. Run the following command in your terminal: uv run alembic revision --autogenerate -m "initial schema". The --autogenerate flag tells Alembic to compare your models with the database schema and create the necessary SQL statements.

    If your database already had tables (e.g., from a previous create_all run), Alembic might think the schema is already up-to-date, resulting in an empty migration. If this happens, you may need to clear your database (e.g., by dropping and recreating the public schema in PostgreSQL) and regenerate the migration. Then, examine the generated migration file (in alembic/versions/) to ensure the upgrade function contains the correct create_table statements for your models and the downgrade function contains the corresponding drop_table statements.
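    For reference, a generated migration file typically has roughly this shape; the revision IDs, table names, and columns below are illustrative, not from the tutorial:

    ```python
    # alembic/versions/xxxx_initial_schema.py (illustrative sketch --
    # your revision IDs and columns will differ).
    import sqlalchemy as sa
    from alembic import op

    revision = "a1b2c3d4e5f6"  # placeholder revision id
    down_revision = None       # the first migration has no parent


    def upgrade() -> None:
        op.create_table(
            "posts",
            sa.Column("id", sa.Integer(), primary_key=True),
            sa.Column("title", sa.String(), nullable=False),
        )


    def downgrade() -> None:
        op.drop_table("posts")
    ```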

  10. Apply the Migration

    Once you have reviewed and are satisfied with the generated migration script, apply it to your database. Run: uv run alembic upgrade head. Here head is a revision identifier meaning the latest migration; Alembic applies all pending migrations up to that version. This command executes the SQL statements defined in your migration file, creating the tables in your PostgreSQL database.

    Verify the tables were created by connecting to your PostgreSQL database using psql and listing the tables (\dt). You should see your application’s tables (e.g., users, posts) and an alembic_version table that Alembic uses to track the applied revision.

  11. Manage Schema Changes with Migrations

    Now, let’s simulate a schema change. Suppose you want to add a likes field to your Post model. Update your model definition (e.g., in models.py) by adding the new field, including default values for both Python (default=0) and the database server (server_default=text('0')). The server default is crucial for adding non-nullable columns to tables that already contain data.
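    A sketch of the model change in SQLAlchemy 2.x mapped_column style (class and column names beyond likes are assumed to match the tutorial’s Post model):

    ```python
    # Sketch of adding `likes` to the Post model (SQLAlchemy 2.x style).
    from sqlalchemy import text
    from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


    class Base(DeclarativeBase):
        pass


    class Post(Base):
        __tablename__ = "posts"

        id: Mapped[int] = mapped_column(primary_key=True)
        # default=0 covers new objects created in Python; server_default
        # fills existing rows when the migration adds the NOT NULL column.
        likes: Mapped[int] = mapped_column(default=0, server_default=text("0"))
    ```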

    After updating the model, generate a new migration: uv run alembic revision --autogenerate -m "add likes to posts". Alembic will detect the new likes column. Review the generated migration file in alembic/versions/. It should contain an op.add_column operation.

    Apply this new migration: uv run alembic upgrade head. This adds the likes column to your posts table in PostgreSQL. You can verify this by inspecting the posts table schema using psql (e.g., \d posts).


Source: Python FastAPI Tutorial (Part 15): PostgreSQL and Alembic – Database Migrations for Production (YouTube)

Written by

John Digweed
