Development Order in a Typical PostgreSQL Application
From Claude AI
When developing a PostgreSQL application with frameworks like FastAPI and SQLAlchemy, there is a common order in which to develop the various components, though it can vary with team preferences and project requirements.
Typical Development Sequence
- Database Models First
  - Start by designing your data model, as it represents the core entities and relationships of your application
  - Define SQLAlchemy models that map to database tables
  - Run migrations to create the actual database schema
- Pydantic Schemas Second
  - Create request/response schemas based on how you want to interact with your models
  - These may differ from your database models (e.g., hiding certain fields, combining data from multiple models)
  - Define validation rules specific to API interactions
- Service Layer Third
  - Implement business logic that connects your API schemas to your database models
  - Handle complex operations that might involve multiple models
  - Manage transactions and error handling
- API Endpoints Last
  - Define the actual routes and endpoints that use your schemas and service layer
  - Implement authentication, permissions, and other web-specific concerns
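The four layers above can be sketched in framework-free Python. This is only an illustration of how the pieces relate: the dataclasses stand in for SQLAlchemy models and Pydantic schemas, the dict stands in for a table, and all names (UserModel, UserService, post_user, etc.) are hypothetical, not from any real project.

```python
from dataclasses import dataclass

# --- 1. Database model (stand-in for a SQLAlchemy model) ---
@dataclass
class UserModel:
    id: int
    email: str
    hashed_password: str  # internal field, never exposed through the API

# --- 2. API schemas (stand-ins for Pydantic request/response schemas) ---
@dataclass
class UserCreate:  # request schema: what the client sends
    email: str
    password: str

@dataclass
class UserRead:  # response schema: hides hashed_password
    id: int
    email: str

# --- 3. Service layer: business logic between schemas and models ---
class UserService:
    def __init__(self):
        self._db: dict[int, UserModel] = {}  # in-memory stand-in for a table
        self._next_id = 1

    def create_user(self, data: UserCreate) -> UserRead:
        user = UserModel(id=self._next_id, email=data.email,
                         hashed_password=f"hash({data.password})")
        self._db[user.id] = user
        self._next_id += 1
        return UserRead(id=user.id, email=user.email)  # model -> response schema

# --- 4. API endpoint (stand-in for a FastAPI route handler) ---
service = UserService()

def post_user(payload: dict) -> dict:
    data = UserCreate(**payload)            # validation/parsing layer
    return vars(service.create_user(data))  # serialize the response schema

print(post_user({"email": "ada@example.com", "password": "s3cret"}))
# -> {'id': 1, 'email': 'ada@example.com'}
```

Note that the password never reaches the response: the request schema accepts it, the model stores a derived value, and the response schema omits it entirely.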
Development Approach
Database-Driven Approach (Common)
This approach suits projects where the data structure is well understood up front:
Database Models → Pydantic Schemas → Service Layer → API
API-Driven Approach
When requirements are driven by API needs:
Pydantic Schemas (API contract) → Database Models → Service Layer → API
Iterative Development
In practice, most teams work iteratively:
- Create minimal versions of models and schemas
- Implement a simple service layer
- Build basic API endpoints
- Refine all layers as requirements evolve
How They Interact
- Models represent persistent storage (tables, relations)
- Schemas represent data transfer and validation (API contract)
- Service Layer contains business logic and transforms between models and schemas
The service layer typically:
- Accepts validated data from Pydantic schemas
- Queries or updates the database using SQLAlchemy models
- Transforms database results back into response schemas
- Handles business rules and complex operations
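One of those responsibilities, transaction management, can be sketched with the standard-library sqlite3 module (used here purely for portability; in the application described above this would be a SQLAlchemy session against PostgreSQL, and the table and function names are hypothetical):

```python
import sqlite3

def rename_user(conn: sqlite3.Connection, user_id: int, new_name: str) -> None:
    """Hypothetical service-layer operation: rename a user and record an
    audit entry in one transaction, so both writes succeed or neither does."""
    try:
        with conn:  # commits on success, rolls back on any exception
            cur = conn.execute(
                "UPDATE users SET name = ? WHERE id = ?", (new_name, user_id)
            )
            if cur.rowcount == 0:
                raise LookupError(f"no user with id {user_id}")
            conn.execute(
                "INSERT INTO audit_log (user_id, action) VALUES (?, ?)",
                (user_id, f"renamed to {new_name}"),
            )
    except sqlite3.Error as exc:
        # Translate low-level database errors into a domain-level error
        raise RuntimeError("rename failed") from exc

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE audit_log (user_id INTEGER, action TEXT)")
conn.execute("INSERT INTO users (id, name) VALUES (1, 'Ada')")

rename_user(conn, 1, "Ada Lovelace")
print(conn.execute("SELECT name FROM users WHERE id = 1").fetchone()[0])
# -> Ada Lovelace
```

Because the two writes share one transaction, a failure partway through (e.g., a missing user) leaves the audit log untouched, which is exactly the kind of invariant the service layer exists to enforce.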
This separation of concerns allows each layer to evolve independently while maintaining a clean architecture.
Then there’s SQLModel, which is designed to eliminate the need for separate database models and Pydantic models by combining them into a single model. It was created by Sebastián Ramírez (the creator of FastAPI) to bridge the gap between SQLAlchemy and Pydantic.
The key benefits of using SQLModel include:
- You define your models just once, and they work for both database operations and API validation/serialization
- You get the type safety and validation features of Pydantic
- You get the ORM capabilities of SQLAlchemy
- It reduces code duplication and makes your codebase more maintainable
Here’s a simple example:
from sqlmodel import Field, Session, SQLModel, create_engine

class Hero(SQLModel, table=True):
    id: int | None = Field(default=None, primary_key=True)
    name: str
    secret_name: str
    age: int | None = None

# Use as a Pydantic model for API requests/responses
hero_data = Hero(name="Spider-Boy", secret_name="Pedro Parqueador", age=16)

# Use as a SQLAlchemy model for database operations
engine = create_engine("sqlite:///database.db")
SQLModel.metadata.create_all(engine)

with Session(engine) as session:
    session.add(hero_data)
    session.commit()
In this example, the Hero class serves as both a SQLAlchemy table model and a Pydantic model, all in one definition.