PostgreSQL Schemas Implementation Guide
✅ IMPLEMENTATION STATUS: COMPLETED
This document outlines the implementation of PostgreSQL schemas to share a single Render database between production and staging environments. This has been implemented in the foe_foundry_db package.
Overview
Instead of paying for separate database instances, we use PostgreSQL's native schema feature to logically separate production and staging data within the same database instance.
Schemas implemented:
- prod - Production environment tables
- stage - Staging environment tables
- public - Development environment (default schema)
Implementation Location: The database code has been extracted to a dedicated foe_foundry_db package with built-in schema support.
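To make the isolation concrete, the following minimal sketch (not part of foe_foundry_db) creates a throwaway demo table in both schemas and shows that each schema only sees its own rows; the demo table and the direct use of DATABASE_URL are purely illustrative:
# Illustrative only: the 'demo' table and direct engine creation are assumptions,
# not part of the foe_foundry_db package.
import os

from sqlalchemy import create_engine, text

engine = create_engine(os.environ["DATABASE_URL"])

with engine.begin() as conn:
    for schema in ("prod", "stage"):
        conn.execute(text(f"CREATE SCHEMA IF NOT EXISTS {schema}"))
        conn.execute(text(f"CREATE TABLE IF NOT EXISTS {schema}.demo (id serial PRIMARY KEY, note text)"))
        conn.execute(text(f"INSERT INTO {schema}.demo (note) VALUES ('{schema} only')"))

with engine.connect() as conn:
    for schema in ("prod", "stage"):
        # Each schema sees only its own rows, even though the table name is identical.
        rows = conn.execute(text(f"SELECT note FROM {schema}.demo")).scalars().all()
        print(schema, rows)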
Environment Configuration
Environment Variables
✅ IMPLEMENTED - Direct schema naming via DATABASE_SCHEMA
Each environment sets the DATABASE_SCHEMA variable to specify which schema to use:
# Production deployment
DATABASE_SCHEMA=prod
# Staging deployment
DATABASE_SCHEMA=stage
# Local development (no schema, uses default 'public')
# DATABASE_SCHEMA is not set or empty
Schema Mapping
The system directly uses the DATABASE_SCHEMA value as the schema name:
- DATABASE_SCHEMA=prod → prod schema
- DATABASE_SCHEMA=stage → stage schema
- No DATABASE_SCHEMA → public schema (PostgreSQL default)
This provides direct control over schema naming without intermediary mapping logic.
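The resolution logic amounts to a couple of lines; this is a simplified sketch of the behavior described above, not the exact code in foe_foundry_db:
import os

# DATABASE_SCHEMA is used verbatim as the schema name; when unset, connections
# fall back to PostgreSQL's default 'public' schema.
DATABASE_SCHEMA = os.getenv("DATABASE_SCHEMA") or None
search_path = f"{DATABASE_SCHEMA},public" if DATABASE_SCHEMA else "public"
print(f"search_path: {search_path}")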
Implementation Details
✅ COMPLETED - All code changes have been implemented in the foe_foundry_db package.
Key Changes Made
1. Created foe_foundry_db package - a dedicated database package with:
- Database models (User, AnonymousSession)
- Connection management with schema support
- Repository pattern for data access
- Migration and query storage
- Schema support - uses the DATABASE_SCHEMA environment variable directly:
  - No complex mapping logic needed
  - Direct control over schema naming
  - Automatic schema creation on first use
2. Database Configuration Updates
✅ IMPLEMENTED - See foe_foundry_db package
The database configuration has been moved to the foe_foundry_db package with schema support built in.
Configuration in foe_foundry_db/database.py:
The PostgreSQL engine automatically uses the DATABASE_SCHEMA environment variable:
# Set this environment variable to use a specific schema
DATABASE_SCHEMA = os.getenv("DATABASE_SCHEMA", None) # e.g., 'prod' or 'stage'
Schema is automatically applied to connections:
elif DATABASE_URL.startswith("postgresql://"):
    connect_args = {
        "sslmode": "require",
        "connect_timeout": 10,
    }
    # Add schema to search path if specified
    if DATABASE_SCHEMA:
        connect_args["options"] = f"-c search_path={DATABASE_SCHEMA},public"
    engine = create_engine(
        DATABASE_URL,
        echo=False,
        pool_size=10,
        max_overflow=20,
        pool_timeout=30,
        pool_recycle=3600,
        pool_pre_ping=True,
        connect_args=connect_args,
    )
Schema creation is automatic:
def create_db_and_tables():
    """Create database tables if they don't exist."""
    try:
        # If using PostgreSQL with a schema, create the schema first
        if DATABASE_SCHEMA and DATABASE_URL.startswith("postgresql://"):
            with Session(engine) as session:
                session.exec(text(f"CREATE SCHEMA IF NOT EXISTS {DATABASE_SCHEMA}"))
                session.commit()
        # Create all tables
        SQLModel.metadata.create_all(engine)
    except Exception as e:
        logging.getLogger(__name__).error(f"Failed to create database tables: {e}")
        raise RuntimeError(f"Database initialization failed: {e}") from e
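One way to wire this into application startup is shown below; the FastAPI lifespan hook and the assumption that foe_foundry_db exports create_db_and_tables are illustrative, not the package's documented entry point:
# Illustrative startup wiring; assumes a FastAPI app and an exported
# create_db_and_tables - adjust to the real application entry point.
from contextlib import asynccontextmanager

from fastapi import FastAPI

from foe_foundry_db import create_db_and_tables  # assumed export


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Ensure the schema exists and tables are created before serving requests.
    create_db_and_tables()
    yield


app = FastAPI(lifespan=lifespan)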
3. Migration Strategy
Option A: Schema-Aware Migrations (Recommended)
Create db/migrations/create_schemas.sql:
-- Create schemas for different environments
CREATE SCHEMA IF NOT EXISTS prod;
CREATE SCHEMA IF NOT EXISTS stage;
CREATE SCHEMA IF NOT EXISTS test;
-- Grant appropriate permissions
GRANT USAGE ON SCHEMA prod TO current_user;
GRANT USAGE ON SCHEMA stage TO current_user;
GRANT USAGE ON SCHEMA test TO current_user;
GRANT CREATE ON SCHEMA prod TO current_user;
GRANT CREATE ON SCHEMA stage TO current_user;
GRANT CREATE ON SCHEMA test TO current_user;
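If Option A is chosen, a small one-off runner can apply the SQL file during deployment. The sketch below assumes the db/migrations/create_schemas.sql path shown above and a DATABASE_URL environment variable:
# Sketch of a one-off runner for Option A; path and env var names are assumptions.
import os
from pathlib import Path

from sqlalchemy import create_engine, text


def apply_schema_migration(sql_path: str = "db/migrations/create_schemas.sql") -> None:
    engine = create_engine(os.environ["DATABASE_URL"])
    sql = Path(sql_path).read_text()
    with engine.begin() as conn:
        # Execute each semicolon-terminated statement; a simple split is enough
        # for this file, which contains no procedural SQL.
        for statement in (s.strip() for s in sql.split(";")):
            if statement:
                conn.execute(text(statement))


if __name__ == "__main__":
    apply_schema_migration()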
Option B: Environment-Specific Migration Script
Create scripts/migrate_to_schema.py:
#!/usr/bin/env python3
"""
Migration script to move existing tables to appropriate schemas.
Run this ONCE during deployment to move existing data.
"""
import os
from sqlalchemy import create_engine, text
def migrate_to_schema():
    database_url = os.getenv("DATABASE_URL")
    if not database_url or not database_url.startswith("postgresql://"):
        print("This migration only works with PostgreSQL")
        return

    engine = create_engine(database_url)
    with engine.connect() as conn:
        # Create schemas
        conn.execute(text("CREATE SCHEMA IF NOT EXISTS prod"))
        conn.execute(text("CREATE SCHEMA IF NOT EXISTS stage"))

        # Example: Move existing production tables
        # Only run this on production database with existing data
        tables = ["users", "anon_sessions"]
        for table in tables:
            # Check if table exists in public schema
            result = conn.execute(text(f"""
                SELECT EXISTS (
                    SELECT FROM information_schema.tables
                    WHERE table_schema = 'public'
                    AND table_name = '{table}'
                )
            """))
            if result.scalar():
                print(f"Moving {table} from public to prod schema")
                conn.execute(text(f"ALTER TABLE public.{table} SET SCHEMA prod"))

        conn.commit()
        print("Migration completed successfully")


if __name__ == "__main__":
    migrate_to_schema()
4. Testing Updates
Update the test configuration to use the test schema.
In test setup:
# tests/conftest.py or similar
import os
os.environ["DATABASE_SCHEMA"] = "test"
Deployment Process
Initial Setup (One-time)
- Create schemas in the existing database:
  CREATE SCHEMA IF NOT EXISTS prod;
  CREATE SCHEMA IF NOT EXISTS stage;
- Move existing production data (if any):
  - Run the migration script to move tables from the public schema to the prod schema
  - Or recreate the tables in the new schema and migrate the data
- Deploy code changes:
  - Update both production and staging deployments with the new code
  - Set the appropriate DATABASE_SCHEMA variable for each service
Ongoing Deployments
- Staging deployment:
  - Set DATABASE_SCHEMA=stage
  - Tables are created in the stage schema
  - No impact on production data
- Production deployment:
  - Set DATABASE_SCHEMA=prod
  - Tables use the prod schema
  - No impact on staging data
Environment Variables Setup
Render Dashboard Configuration
Production Service:
DATABASE_SCHEMA=prod
DATABASE_URL=postgresql://user:pass@host/dbname
Staging Service:
DATABASE_SCHEMA=stage
DATABASE_URL=postgresql://user:pass@host/dbname # Same database!
Local Development:
# No DATABASE_SCHEMA set (uses 'public' schema)
DATABASE_URL=sqlite:///./foe_foundry_accounts.db # Or local PostgreSQL
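A small startup check can catch a misconfigured service early; this is a sketch, and the warning-only policy is an assumption:
# Sketch of a startup sanity check for the variables above.
import logging
import os


def check_database_env() -> None:
    url = os.getenv("DATABASE_URL", "")
    schema = os.getenv("DATABASE_SCHEMA")
    if url.startswith("postgresql://") and not schema:
        # On the shared Render database this means the service would write to 'public'.
        logging.getLogger(__name__).warning(
            "DATABASE_SCHEMA is not set; PostgreSQL connections will use the 'public' schema"
        )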
Verification Steps
1. Schema Creation Verification
-- Connect to database and verify schemas exist
\dn
-- Should show:
--  Name   | Owner
--  prod   | your_user
--  stage  | your_user
--  public | postgres
2. Table Location Verification
-- Check where tables are created
SELECT schemaname, tablename
FROM pg_tables
WHERE tablename IN ('users', 'anon_sessions');
-- Production should show 'prod' schema
-- Staging should show 'stage' schema
3. Application Verification
# Test script to verify schema isolation
from foe_foundry_db import engine
from sqlmodel import text
with engine.connect() as conn:
    # This should show the current search_path
    result = conn.execute(text("SHOW search_path"))
    print(f"Current search path: {result.scalar()}")

    # This should show tables in the current schema
    result = conn.execute(text("""
        SELECT table_name
        FROM information_schema.tables
        WHERE table_schema = current_schema()
    """))
    print(f"Tables in current schema: {[row[0] for row in result]}")
Rollback Plan
If issues occur, you can roll back by:
- Reverting the code changes to use a single schema
- Moving the tables back to the public schema:
  ALTER TABLE prod.users SET SCHEMA public;
  ALTER TABLE prod.anon_sessions SET SCHEMA public;
- Unsetting the DATABASE_SCHEMA environment variable
Benefits of This Approach
- ✅ Cost savings: Single Render database instance
- ✅ Data isolation: Complete separation between environments
- ✅ Easy management: Standard PostgreSQL features
- ✅ Backup simplicity: Single database to backup
- ✅ Development workflow: Local dev uses SQLite, production uses schemas
Potential Issues & Solutions
Issue: Search Path Problems
Solution: Always use fully qualified table names in raw SQL, or ensure search_path is set correctly in connection.
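For example, raw SQL can qualify table names with the configured schema so the result does not depend on how the connection's search_path was set; the helper below is illustrative:
# Illustrative helper: schema-qualify raw SQL instead of relying on search_path.
import os

from sqlalchemy import text

SCHEMA = os.getenv("DATABASE_SCHEMA", "public")


def count_users(conn) -> int:
    # The explicit schema prefix works regardless of the connection's search_path.
    return conn.execute(text(f"SELECT COUNT(*) FROM {SCHEMA}.users")).scalar_one()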
Issue: Migration Conflicts
Solution: Make migrations schema-aware or run them per environment.
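One way to make a raw-SQL migration schema-aware is to substitute the target schema at run time; the migration statement below (adding a hypothetical display_name column) is only an example:
# Sketch: run the same migration per environment by substituting the target schema.
import os

from sqlalchemy import create_engine, text

SCHEMA = os.environ.get("DATABASE_SCHEMA", "public")

# Hypothetical migration used purely for illustration.
MIGRATION = "ALTER TABLE {schema}.users ADD COLUMN IF NOT EXISTS display_name text"


def run_migration() -> None:
    engine = create_engine(os.environ["DATABASE_URL"])
    with engine.begin() as conn:
        conn.execute(text(MIGRATION.format(schema=SCHEMA)))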
Issue: Debugging Confusion
Solution: Add schema name to logging/error messages for clarity.
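For example, the active schema can be embedded in the logging format so production and staging output are immediately distinguishable; the format string is an assumption:
# Sketch: include the active schema in every log record.
import logging
import os

SCHEMA = os.getenv("DATABASE_SCHEMA", "public")

logging.basicConfig(
    format=f"%(asctime)s [schema={SCHEMA}] %(levelname)s %(name)s: %(message)s",
    level=logging.INFO,
)

logging.getLogger(__name__).info("database initialized")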
Issue: Performance Concerns
Solution: Monitor query performance; schemas have minimal overhead compared to separate databases.
Testing Strategy
- Unit tests: Use the test schema or SQLite
- Integration tests: Verify schema isolation works correctly
- Deployment tests: Verify both environments can deploy independently
- Data isolation tests: Ensure no cross-environment data leakage
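A data isolation test can follow the pattern below; it assumes the prod and stage schemas already exist and uses a throwaway probe table, so treat it as a sketch rather than the project's actual test:
# Illustrative isolation check; assumes both schemas exist in the shared database.
import os

from sqlalchemy import create_engine, text


def make_engine(schema: str):
    return create_engine(
        os.environ["DATABASE_URL"],
        connect_args={"options": f"-c search_path={schema},public"},
    )


def test_no_cross_schema_leakage():
    stage = make_engine("stage")
    prod = make_engine("prod")
    with stage.begin() as conn:
        # Unqualified DDL lands in the first schema on the search_path ('stage').
        conn.execute(text("CREATE TABLE IF NOT EXISTS isolation_probe (id int)"))
        conn.execute(text("INSERT INTO isolation_probe VALUES (1)"))
    with prod.connect() as conn:
        # The probe table must not appear in the prod schema.
        exists = conn.execute(text(
            "SELECT EXISTS (SELECT FROM information_schema.tables "
            "WHERE table_schema = 'prod' AND table_name = 'isolation_probe')"
        )).scalar()
    assert exists is False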
This implementation provides a robust, cost-effective solution for multi-environment database management on Render while maintaining clean separation between production and staging data.