How to Move Python FastAPI File Uploads to AWS S3

Storing uploaded files directly on your application server works for learning and testing, but it is not reliable for production: locally stored files can be lost when a server is replaced or redeployed, and they are not shared when you scale your application across multiple servers.

This tutorial shows you how to move your file uploads to Amazon’s Simple Storage Service (S3) using Python and the Boto3 library. This makes your file storage more durable and scalable.

What You Will Learn

This guide will walk you through setting up an AWS S3 bucket, configuring permissions, and updating your FastAPI application to upload files to S3. You will learn how to use the Boto3 library to interact with S3 and how to handle file uploads and deletions in an asynchronous FastAPI application.

Prerequisites

  • A basic understanding of Python and FastAPI.
  • An AWS account.
  • The FastAPI application code from previous parts of this series (specifically, the file upload functionality).

Step 1: Install Boto3

Boto3 is the official AWS SDK for Python. It allows your Python application to communicate with AWS services like S3. You can install it using pip or uv.

  1. Open your terminal or command prompt.
  2. Run the following command (use pip install boto3 if you are not using uv):
uv add boto3

Step 2: Set Up Your AWS S3 Bucket

You need an S3 bucket to store your files. Buckets are containers for your data in S3. You will also need to configure access settings.

  1. Log in to your AWS Management Console.
  2. Search for S3 and navigate to the S3 service.
  3. Click “Create bucket”.
  4. Choose a unique bucket name. Bucket names must be globally unique, lowercase, and cannot contain underscores. For example, fastapi-blog-uploads.
  5. Select an AWS Region for your bucket.
  6. For “Object Ownership”, keep the default “Bucket owner enforced” with ACLs disabled. This is the modern recommended setting.
  7. Crucially, under “Block Public Access settings”, uncheck “Block all public access”.
  8. Acknowledge the warning about blocking public access. This is necessary because your users’ browsers will need to read images from S3. You will control access more specifically with a bucket policy.
  9. Leave other settings as default and click “Create bucket”.

Tip: Understanding Public Access

By default, AWS blocks all public access to your buckets. For serving images to users, you need to allow public read access. You will set up a bucket policy later to ensure only specific files (like profile pictures) are publicly accessible, not your entire bucket.

Step 3: Create an S3 Bucket Policy

A bucket policy defines permissions for your bucket. You will create a policy that allows public read access to objects within a specific folder (prefix) in your bucket.

  1. Navigate to your newly created S3 bucket in the AWS console.
  2. Click on the “Permissions” tab.
  3. Scroll down to the “Bucket policy” section and click “Edit”.
  4. Paste the following JSON policy, replacing YOUR-BUCKET-NAME with your actual bucket name. This policy allows s3:GetObject (read) actions for any object within the profile-pics/ prefix.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/profile-pics/*"
    }
  ]
}
  5. Click “Save changes”.

Warning: Bucket Name Accuracy

Ensure the bucket name in the policy exactly matches your bucket’s name. If you encounter an error, wait a few seconds for AWS to propagate the changes and try again.
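If you prefer to generate the bucket policy from code rather than hand-editing JSON, a small sketch like the following avoids typos in the ARN. The bucket name and prefix here are placeholders matching this tutorial's examples:

```python
import json

def make_public_read_policy(bucket_name: str, prefix: str = "profile-pics") -> str:
    """Build the public-read bucket policy JSON for a given bucket and prefix."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                # The Resource ARN limits public reads to the given prefix only
                "Resource": f"arn:aws:s3:::{bucket_name}/{prefix}/*",
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(make_public_read_policy("fastapi-blog-uploads"))
```

Paste the printed output into the “Bucket policy” editor; because the ARN is built from the bucket name you pass in, the name in the policy always matches the bucket.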

Step 4: Create an IAM Policy and User

Your application needs permissions to upload and delete files in your S3 bucket. You will create an IAM (Identity and Access Management) policy for these actions and then create an IAM user that your application will use.

  1. In the AWS console, search for “IAM” and navigate to the IAM service.
  2. In the left navigation pane, click “Policies”.
  3. Click “Create policy”.
  4. Select the “JSON” tab and paste the following policy. Replace YOUR-BUCKET-NAME with your bucket’s name. This policy grants s3:PutObject (upload) and s3:DeleteObject (delete) permissions for objects within the profile-pics/ prefix.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutDelete",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/profile-pics/*"
    }
  ]
}
  5. Click “Next: Tags” (optional), then “Next: Review”.
  6. Give the policy a name, like fastapi-blog-s3-policy, and click “Create policy”.
  7. Now, create an IAM user: in the IAM console, click “Users” in the left navigation pane.
  8. Click “Create user”.
  9. Enter a username, for example, fastapi-blog-s3.
  10. Under “Access key type”, select “Applications running outside of AWS” and click “Next”.
  11. On the “Set permissions” page, select “Attach policies directly”.
  12. Search for and select the policy you just created (fastapi-blog-s3-policy).
  13. Click “Next” through the remaining steps and click “Create user”.
  14. On the success page, click “Create access key”. AWS will show you the Access Key ID and Secret Access Key only once. Copy both values immediately and store them securely.

Expert Note: Least Privilege

You are granting only the necessary permissions (upload and delete) to the IAM user.

This follows the principle of least privilege, which is a security best practice. Your application does not need to download files from S3; it only needs to upload and delete them.

Step 5: Configure Environment Variables

Store your AWS credentials and bucket information in your application's environment variables for security.

  1. Open your .env file.
  2. Add the following variables, replacing the placeholders with your bucket name, AWS Region (e.g., us-east-1), Access Key ID, and Secret Access Key:
AWS_S3_BUCKET_NAME=your-bucket-name
AWS_S3_REGION=your-aws-region
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key

Warning: Secure Your Credentials

Ensure your .env file is listed in your .gitignore file to prevent accidentally committing your credentials to version control. This is a critical security step.
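A quick way to make sure .env is ignored is the following one-liner (a sketch; run it from your project root). It only appends the entry when it is not already present, so it is safe to run more than once:

```shell
# Append .env to .gitignore only if it is not already listed
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
```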

Step 6: Update FastAPI Configuration

Load the S3 configuration from your environment variables into your FastAPI application settings.

  1. Open your config.py file.
  2. Add the following settings to load your S3 configuration. Make sure the indentation is correct.

from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    # ... other settings ...
    MAX_UPLOAD_SIZE_BYTES: int = 5 * 1024 * 1024  # 5 MB

    AWS_S3_BUCKET_NAME: str | None = None
    AWS_S3_REGION: str | None = None
    AWS_ACCESS_KEY_ID: str | None = None
    AWS_SECRET_ACCESS_KEY: str | None = None
    AWS_S3_ENDPOINT_URL: str | None = None  # For local S3-compatible storage

settings = Settings()

Tip: Optional Credentials

The AWS credentials are marked as optional (| None).

This allows your application to use IAM roles for authentication when running within AWS services (like EC2 or ECS), eliminating the need for explicit access keys in those environments. Boto3 will automatically find credentials in such cases.

Step 7: Modify Image Utilities

Update your image processing logic to return image bytes instead of saving to disk. You'll also add functions to upload and delete files from S3.

  1. Open your image_utils.py file.
  2. Remove the import for pathlib and the PROFILE_PICS_DIR constant, as you will no longer be interacting with the local file system for uploads.
  3. Add the following imports at the top of the file:

import boto3
from starlette.concurrency import run_in_threadpool
from botocore.exceptions import ClientError
from .config import settings
  4. Create a helper function to get the S3 client. This function uses your configured credentials.

def _get_s3_client():
    if settings.AWS_ACCESS_KEY_ID and settings.AWS_SECRET_ACCESS_KEY and settings.AWS_S3_REGION:
        return boto3.client(
            "s3",
            aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
            aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
            region_name=settings.AWS_S3_REGION,
            endpoint_url=settings.AWS_S3_ENDPOINT_URL,  # Used for local S3-compatible storage
        )
    # Fallback for running within AWS (e.g., EC2, ECS) using IAM roles
    return boto3.client("s3", region_name=settings.AWS_S3_REGION, endpoint_url=settings.AWS_S3_ENDPOINT_URL)
  5. Modify the process_profile_image function. It should now return the processed image bytes and the filename, instead of saving to disk.

from io import BytesIO

def process_profile_image(image_bytes: bytes, filename: str) -> tuple[bytes, str]:
    # ... (image processing logic remains the same - orientation, resize, etc.) ...
    # Instead of saving to disk, save to an in-memory bytes buffer
    output = BytesIO()
    image.save(output, format="JPEG")
    output.seek(0)
    return output.read(), filename
  6. Delete the old delete_profile_image function that worked with the local file system.
  7. Add new functions for uploading and deleting files from S3.

     These functions use boto3 and should be called within run_in_threadpool because boto3 calls are blocking.

def upload_to_s3(file_bytes: bytes, bucket_name: str, key: str):
    s3_client = _get_s3_client()
    try:
        s3_client.upload_fileobj(BytesIO(file_bytes), bucket_name, key, ExtraArgs={"ContentType": "image/jpeg"})
    except ClientError as e:
        print(f"Error uploading to S3: {e}")
        raise

def delete_from_s3(bucket_name: str, key: str):
    s3_client = _get_s3_client()
    try:
        s3_client.delete_object(Bucket=bucket_name, Key=key)
    except ClientError as e:
        print(f"Error deleting from S3: {e}")
        raise
  8. Create asynchronous wrapper functions for these S3 operations. This is crucial for an async framework like FastAPI.

async def upload_profile_image(file_bytes: bytes, filename: str) -> None:
    bucket_name = settings.AWS_S3_BUCKET_NAME
    key = f"profile-pics/{filename}"
    await run_in_threadpool(upload_to_s3, file_bytes, bucket_name, key)

async def delete_profile_image(filename: str) -> None:
    bucket_name = settings.AWS_S3_BUCKET_NAME
    key = f"profile-pics/{filename}"
    await run_in_threadpool(delete_from_s3, bucket_name, key)
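The thread-pool offloading pattern can be demonstrated with just the standard library. This sketch uses asyncio.to_thread, which behaves analogously to starlette's run_in_threadpool (the blocking_upload function is a hypothetical stand-in for a blocking boto3 call):

```python
import asyncio
import time

def blocking_upload(data: bytes) -> str:
    """Stand-in for a blocking boto3 call: sleeps to simulate network I/O."""
    time.sleep(0.1)  # would block the event loop if called directly in a coroutine
    return f"uploaded {len(data)} bytes"

async def main() -> None:
    # Offload the blocking call to a worker thread, as run_in_threadpool does,
    # so the event loop stays free to serve other requests in the meantime.
    result, _ = await asyncio.gather(
        asyncio.to_thread(blocking_upload, b"hello"),
        asyncio.sleep(0.05),  # this coroutine still runs while the upload blocks a thread
    )
    print(result)  # -> uploaded 5 bytes

asyncio.run(main())
```

Calling blocking_upload directly inside a coroutine would stall every request handled by that event loop for the duration of the call; the thread offload is what keeps the server responsive.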

Step 8: Update User Model

Modify the image_path property in your models.py to return a full S3 URL instead of a local file path.

  1. Open your models.py file.
  2. Import settings from config.
  3. Update the image_path property to construct the S3 URL. The default image will still use a local path since it's part of your application code.

from .config import settings

# ... other imports ...

class User(Base):
    # ... other fields ...

    @property
    def image_path(self) -> str:
        if self.image_file_name:
            # Construct the S3 URL
            return f"https://{settings.AWS_S3_BUCKET_NAME}.s3.{settings.AWS_S3_REGION}.amazonaws.com/profile-pics/{self.image_file_name}"
        # Return path to default image if no profile picture is uploaded
        return "/static/images/default_profile.jpg"
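The property above produces a virtual-hosted-style S3 URL. A standalone sketch of the same string construction, with hypothetical bucket and region values, makes the format easy to verify:

```python
def s3_object_url(bucket: str, region: str, key: str) -> str:
    """Build a virtual-hosted-style S3 URL: https://<bucket>.s3.<region>.amazonaws.com/<key>"""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

# Placeholder values for illustration
url = s3_object_url("fastapi-blog-uploads", "us-east-1", "profile-pics/abc123.jpg")
print(url)  # -> https://fastapi-blog-uploads.s3.us-east-1.amazonaws.com/profile-pics/abc123.jpg
```

Because the bucket policy from earlier grants public s3:GetObject on the profile-pics/ prefix, browsers can load these URLs directly without any credentials.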

Step 9: Update Routes

Adjust your FastAPI routes to use the new S3 upload and delete functions, and to handle potential S3 errors.

  1. Open your routers/users.py file.
  2. Add the import for ClientError from botocore.exceptions and update your import for image_utils to include upload_profile_image.

from botocore.exceptions import ClientError
from .image_utils import process_profile_image, upload_profile_image, delete_profile_image
  3. Update the delete_user endpoint to await the asynchronous delete_profile_image function.

@router.delete("/users/me", response_model=UserResponse)
async def delete_user(
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db),
):
    # ... (authorization checks) ...
    old_image_filename = current_user.image_file_name

    db.delete(current_user)
    await db.commit()

    if old_image_filename:
        # Await the async delete function
        await delete_profile_image(old_image_filename)

    return UserResponse.model_validate(current_user)
  4. Update the upload_profile_picture endpoint:

@router.post("/users/me/profile-picture", response_model=UserResponse)
async def upload_profile_picture(
    file: UploadFile = File(...),
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db),
):
    old_image_filename = current_user.image_file_name

    try:
        contents = await file.read()
        # Process the image and get bytes and new filename
        processed_bytes, new_filename = process_profile_image(contents, file.filename)

        # Upload processed image bytes to S3
        await upload_profile_image(processed_bytes, new_filename)

        # Update user in database
        current_user.image_file_name = new_filename
        db.add(current_user)
        await db.commit()
        # Refresh to get the updated image_path property
        await db.refresh(current_user)

        # Delete the old image from S3 after successful upload and DB update
        if old_image_filename:
            await delete_profile_image(old_image_filename)

        return UserResponse.model_validate(current_user)

    except ClientError as e:
        # Handle S3-specific errors
        raise HTTPException(status_code=500, detail=f"S3 upload error: {e}")
    except Exception as e:
        # Handle other potential errors during processing or upload
        raise HTTPException(status_code=500, detail=f"An error occurred: {e}")

Important Note on Order of Operations

The order is crucial: process the image, upload it to S3, update the database, and finally, delete the old image from S3.

If the database update fails, you will still have the old image in S3, preventing data loss. If the S3 upload fails, an error is raised before the database is updated or the old image is deleted.
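This safeguard can be sketched with stand-in objects. The following toy model (plain dicts playing the roles of the S3 bucket and the database; all names are hypothetical) shows why uploading first and deleting last protects the old image:

```python
def replace_profile_image(bucket: dict, db: dict, user: str, new_key: str,
                          new_bytes: bytes, commit_fails: bool = False) -> None:
    """Toy model of the route's ordering: upload new -> commit DB -> delete old."""
    old_key = db.get(user)
    bucket[new_key] = new_bytes                 # 1. upload the new object first
    if commit_fails:
        raise RuntimeError("DB commit failed")  # old image is still intact in "S3"
    db[user] = new_key                          # 2. then update the "database"
    if old_key:
        bucket.pop(old_key, None)               # 3. only now delete the old object

bucket = {"profile-pics/old.jpg": b"old"}
db = {"alice": "profile-pics/old.jpg"}
replace_profile_image(bucket, db, "alice", "profile-pics/new.jpg", b"new")
print(db["alice"])                        # -> profile-pics/new.jpg
print("profile-pics/old.jpg" in bucket)   # -> False
```

If the commit step fails, the exception fires before steps 2 and 3 run, so the database still points at the old key and the old object is still in the bucket; the only leftover is an orphaned new object, which is far easier to clean up than a lost profile picture.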

With these changes, your FastAPI application now reliably stores uploaded files in AWS S3, making your application more robust for production use.


Source: Python FastAPI Tutorial (Part 16): AWS S3 and Boto3 - Moving File Uploads to the Cloud (YouTube)
