We'll build a complete FastAPI application from scratch, exploring every aspect from basic setup to production deployment with Docker. I will try to answer most of the "why?" questions along the way, but there will be times when I forget or skip one; in those cases, I would encourage you to use AI tools or look it up.
Folder structure:
🌳 C:\Users\soura\Documents\mine\fastapi\blog
├── 📁 core
│   └── 📄 config.py
├── 📄 docker-compose.yaml
├── 📄 Dockerfile
├── 📄 main.py
└── 📄 requirements.txt
Directories: 1, Files: 5
Let's start by looking at a minimal FastAPI application. Create a file named main.py and put the code below in it.
from fastapi import FastAPI
from typing import Dict

app: FastAPI = FastAPI(
    title="GenAI Blog API",
    description="API Powered by GenAI",
    version="1.0.0",
)

@app.get("/")
def read_root() -> Dict[str, str]:
    return {"message": "Hello World"}
Yes, this is exactly the bare minimum code required to run a FastAPI-based API. It defines an API with a title, description, and version, and creates a single endpoint at the root URL (/) that returns a JSON response with a "Hello World" message. But we have not yet installed FastAPI and its dependencies, so let's install them. You can manually install fastapi and uvicorn (the web server that serves FastAPI), or you can put the lines below in a requirements.txt file.
fastapi[standard]==0.115.12
#or use
fastapi==0.115.12
uvicorn==0.27.1
At this point we can simply create and activate a virtual environment, install the requirements, and run fastapi dev main.py or uvicorn main:app --reload:
python -m venv env
.\env\Scripts\activate   # on Windows
source env/bin/activate  # on Linux/macOS
pip install -r requirements.txt
fastapi dev main.py
However, for any number of reasons these commands might not run on your system. Some of you might say "it works on my machine", while others say "it does not work on my machine". The simplest way to fix this is to use Docker.
Why even use Docker ?
Docker solves the classic "it works on my machine" problem by:
- Environment Consistency: Same runtime environment everywhere
- Dependency Isolation: No conflicts with system packages
Let’s start by creating a Dockerfile and understanding it:
FROM python:3.13-slim
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
We are going to use a very slim Linux image with Python 3.13 preinstalled.
PYTHONDONTWRITEBYTECODE=1:
- Prevents Python from writing .pyc files
PYTHONUNBUFFERED=1:
- Forces stdout and stderr to be unbuffered
- Ensures logs appear immediately in container logs
- Critical for proper logging in containerized environments
This Dockerfile basically means: take a Linux image with Python installed in it, do some bookkeeping to avoid .pyc files and make logs appear immediately, and install the packages listed in requirements.txt. That's it.
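To see these two environment variables in action outside Docker, here is a small sketch (not part of the project files) that launches a child interpreter with both set and inspects the flags Python exposes for them: sys.dont_write_bytecode mirrors PYTHONDONTWRITEBYTECODE, and the stdout text layer's write_through attribute is True when stdout is unbuffered.

```python
import os
import subprocess
import sys

# Same environment variables the Dockerfile sets
env = dict(os.environ, PYTHONDONTWRITEBYTECODE="1", PYTHONUNBUFFERED="1")

# Ask a child interpreter to report the flags these variables control
probe = "import sys; print(sys.dont_write_bytecode, sys.stdout.write_through)"
result = subprocess.run(
    [sys.executable, "-c", probe],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())  # → True True
```

Without the variables set, both values print False, which is why a forgotten PYTHONUNBUFFERED often shows up as "my container logs only appear when the app exits".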
At this point, we could add a CMD instruction and run docker build and docker run to start our FastAPI server. However, in later sections we are going to add more services like a database, maybe caching, etc. So let's create a docker-compose.yaml file to manage everything in one file.
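For reference, the standalone route would look something like this: one extra CMD line at the end of the Dockerfile above (a sketch; the CMD becomes redundant once docker-compose supplies the command):

```dockerfile
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

You would then build and run it with docker build -t blog . followed by docker run -p 8000:8000 blog (the blog tag name is just an example).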
services:
  web:
    build: .
    command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    restart: on-failure:3
Now we can run docker compose up --build. The volumes line syncs the current working directory with the /app folder inside the container, the container's port 8000 is mapped to port 8000 on our local machine, and the restart policy tells Docker to retry the uvicorn command at most 3 times if it fails.
Now, we can visit http://127.0.0.1:8000/docs and play around with the simple FastAPI API.
One last thing that I would like to do is move the static configuration into a config file. For that, we can create a core/config.py file, where core is the name of a directory.
class Settings:
    title: str = "GenAI Blog API"
    description: str = "API Powered by Generative AI"
    version: str = "1.0.0"

settings: Settings = Settings()
Now, we can refactor our main.py file as: