PostgreSQL as Universal Database: Replacing Specialised Systems

19. 02. 2026 · Updated: 27. 03. 2026 · 4 min read · CORE SYSTEMS · data
PostgreSQL as the Universal Database: Why It Replaces Specialised Systems in 2026

PostgreSQL started as an academic project at Berkeley. Today it is the most flexible database in the world — and with each version, it absorbs more specialized systems.

PostgreSQL in 2026: What It Can Do

Relational Data (of course)

ACID transactions, MVCC, window functions, CTEs, lateral joins — all standard. But every mature relational DB offers those.

Document Database (replaces MongoDB)

JSONB type with full indexing. GIN indexes on JSONB documents make query performance competitive with MongoDB for many workloads:

-- Store a document
INSERT INTO products (data) VALUES (
  '{"name": "Widget", "specs": {"weight": 1.5, "color": "blue"}, "tags": ["new", "sale"]}'::jsonb
);

-- Query a nested document
SELECT data->>'name' FROM products
WHERE data->'specs'->>'color' = 'blue'
  AND data->'tags' ? 'sale';

-- GIN index on entire JSONB
CREATE INDEX idx_products_data ON products USING GIN (data);

When it replaces MongoDB: Documents with occasional JOINs, transactional consistency, queries across nested structures. MongoDB has an advantage only in extreme horizontal scaling (sharding).
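In-place updates of nested documents work as well. A sketch assuming the `products` table above (the `'clearance'` tag is illustrative):

```sql
-- Change a nested field (jsonb_set takes a text[] path and a jsonb value)
UPDATE products
SET data = jsonb_set(data, '{specs,color}', '"red"')
WHERE data->>'name' = 'Widget';

-- Append a tag: || concatenates two jsonb arrays
UPDATE products
SET data = jsonb_set(data, '{tags}', (data->'tags') || '"clearance"')
WHERE data->>'name' = 'Widget';
```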

Full-text Search (replaces Elasticsearch)

Built-in full-text with Czech stemmer, ranking, highlighting:

-- Create a full-text index
ALTER TABLE articles ADD COLUMN tsv tsvector
  GENERATED ALWAYS AS (
    setweight(to_tsvector('czech', coalesce(title, '')), 'A') ||
    setweight(to_tsvector('czech', coalesce(body, '')), 'B')
  ) STORED;

CREATE INDEX idx_articles_tsv ON articles USING GIN (tsv);

-- Search with ranking
SELECT title, ts_rank(tsv, query) AS rank
FROM articles, to_tsquery('czech', 'kubernetes & produkce') query
WHERE tsv @@ query
ORDER BY rank DESC;

When it replaces Elasticsearch: Up to millions of documents, simple full-text queries, you don’t need analytical aggregations over logs. Elasticsearch is still better for log management, real-time analytics, and fuzzy matching at large scale.
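The ranking query above doesn't show the highlighting mentioned earlier; that is a single function call. A sketch assuming the same `articles` table and Czech configuration:

```sql
-- ts_headline wraps matched terms in markers of your choice
SELECT ts_headline('czech', body,
         to_tsquery('czech', 'kubernetes & produkce'),
         'StartSel=<b>, StopSel=</b>, MaxWords=35, MinWords=15') AS snippet
FROM articles
WHERE tsv @@ to_tsquery('czech', 'kubernetes & produkce');
```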

Vector Database (pgvector — replaces Pinecone/Weaviate)

With the pgvector extension, you have vector embeddings right next to relational data:

-- pgvector setup
CREATE EXTENSION vector;

CREATE TABLE documents (
  id SERIAL PRIMARY KEY,
  content TEXT,
  embedding vector(1536),  -- OpenAI ada-002 dimension
  metadata JSONB
);

-- HNSW index for fast ANN search
CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops);

-- Semantic search
SELECT content, 1 - (embedding <=> $1) AS similarity
FROM documents
ORDER BY embedding <=> $1
LIMIT 10;

When it replaces Pinecone/Weaviate: RAG applications with <10M vectors where you need JOINs with relational data (metadata filtering + vector search in one query). Specialized vector DBs are better above 100M+ vectors and for multi-tenant SaaS.
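The "metadata filtering + vector search in one query" point can be sketched directly; the `tenant` and `lang` keys are hypothetical, and note that with HNSW such filters are applied after the index scan, so very selective filters may need a larger `ef_search`:

```sql
-- Relational/JSONB filter and ANN ordering in a single statement
SELECT d.content, 1 - (d.embedding <=> $1) AS similarity
FROM documents d
WHERE d.metadata->>'tenant' = 'acme'
  AND d.metadata->>'lang' = 'cs'
ORDER BY d.embedding <=> $1
LIMIT 10;
```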

Time-series (TimescaleDB — replaces InfluxDB)

The TimescaleDB extension adds hypertables with automatic partitioning:

-- TimescaleDB
CREATE TABLE metrics (
  time TIMESTAMPTZ NOT NULL,
  device_id TEXT,
  temperature DOUBLE PRECISION,
  humidity DOUBLE PRECISION
);

SELECT create_hypertable('metrics', 'time');

-- Continuous aggregates (materialized views with auto-refresh)
CREATE MATERIALIZED VIEW metrics_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       AVG(temperature) AS avg_temp,
       MAX(humidity) AS max_humidity
FROM metrics
GROUP BY bucket, device_id;
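The view above refreshes via a policy; TimescaleDB's policy API also handles retention. A sketch with illustrative intervals:

```sql
-- Keep metrics_hourly refreshed on a schedule
SELECT add_continuous_aggregate_policy('metrics_hourly',
  start_offset      => INTERVAL '3 hours',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '30 minutes');

-- Drop raw chunks older than 90 days
SELECT add_retention_policy('metrics', INTERVAL '90 days');
```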

Geospatial (PostGIS — replaces dedicated GIS)

PostGIS is the de facto standard for geospatial data. Supports 2D/3D geometries, rasters, routing:

-- Find branches within 5 km
SELECT name, ST_Distance(
  location::geography,
  ST_MakePoint(14.4378, 50.0755)::geography
) AS distance_m
FROM branches
WHERE ST_DWithin(
  location::geography,
  ST_MakePoint(14.4378, 50.0755)::geography,
  5000
)
ORDER BY distance_m;
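The radius query above is only fast with a spatial index. Since the query casts to `geography`, the index should match — either store the column as `geography`, or build a functional index on the cast (a sketch assuming the `branches` table):

```sql
-- GiST index on the geography cast, so ST_DWithin can use it
CREATE INDEX idx_branches_location_geog
  ON branches USING GIST ((location::geography));
```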

Cache Layer (replaces Redis for some use cases)

UNLOGGED tables + index = cache without network hop:

CREATE UNLOGGED TABLE cache (
  key TEXT PRIMARY KEY,
  value JSONB,
  expires_at TIMESTAMPTZ
);

-- Index to make periodic deletion of expired entries cheap
CREATE INDEX ON cache (expires_at);

No, it won’t replace Redis for pub/sub or sub-millisecond latencies. But for session storage and application cache with TTL? One fewer dependency.
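The typical cache operations map onto three statements; a sketch assuming the `cache` table above (key and TTL values are illustrative):

```sql
-- Set with TTL: upsert
INSERT INTO cache (key, value, expires_at)
VALUES ('session:42', '{"user_id": 7}'::jsonb, now() + interval '30 minutes')
ON CONFLICT (key) DO UPDATE
  SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;

-- Get, ignoring expired entries
SELECT value FROM cache
WHERE key = 'session:42' AND expires_at > now();

-- Cleanup job (cron or pg_cron); the expires_at index keeps this cheap
DELETE FROM cache WHERE expires_at < now();
```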

Architecture: How Many Specialized DBs Do You Need?

Typical Enterprise Stack 2020

PostgreSQL (relational) + MongoDB (documents) + Elasticsearch (full-text)
+ Redis (cache) + InfluxDB (metrics) + Pinecone (vectors)
= 6 databases, 6 operational costs, 6 backup strategies

Consolidated Stack 2026

PostgreSQL + pgvector + TimescaleDB + PostGIS
= 1 database, 1 backup, 1 monitoring, 1 team

Savings: typically 40–60% of operational costs, simpler DR, fewer specialist skills required.

When PostgreSQL Is Not Enough

  • >100 TB of data — consider Citus (distributed PG) or a dedicated solution
  • Sub-millisecond cache — Redis/Dragonfly
  • Log analytics at petabyte scale — ClickHouse, Elasticsearch
  • Graph queries — Neo4j (Apache AGE extension exists, but immature)
  • Extreme write throughput — ScyllaDB, Cassandra
  • Multi-region active-active — CockroachDB, Spanner

Production Tips

Connection Pooling Is Mandatory

PostgreSQL spawns a backend process per connection, and performance degrades above a few hundred connections. Use PgBouncer or Supavisor:

App → PgBouncer (transaction pooling) → PostgreSQL
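A minimal `pgbouncer.ini` sketch for the transaction-pooling setup above — host, database name, and pool sizes are illustrative:

```ini
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
default_pool_size = 20
max_client_conn = 1000
```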

Vacuum and Autovacuum

MVCC = dead rows. Autovacuum must keep up. Monitor pg_stat_user_tables.n_dead_tup and set more aggressive autovacuum for large tables.
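Both checks fit in two statements; a sketch where `orders` stands in for any large, write-heavy table:

```sql
-- Which tables accumulate dead tuples fastest?
SELECT relname, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

-- Per-table override: vacuum at 1% dead tuples instead of the 20% default
ALTER TABLE orders SET (
  autovacuum_vacuum_scale_factor = 0.01,
  autovacuum_vacuum_cost_delay   = 2
);
```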

Partitioning for Large Tables

Declarative partitioning since PG 12+. For time-series data, partition per month/week. For multi-tenant, partition per tenant.
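Declarative range partitioning for time-series data looks like this; table and column names are illustrative:

```sql
CREATE TABLE events (
  id         BIGINT GENERATED ALWAYS AS IDENTITY,
  created_at TIMESTAMPTZ NOT NULL,
  payload    JSONB
) PARTITION BY RANGE (created_at);

-- One partition per month; create the next one ahead of time (or use pg_partman)
CREATE TABLE events_2026_03 PARTITION OF events
  FOR VALUES FROM ('2026-03-01') TO ('2026-04-01');
```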

Logical Replication for Zero-downtime Migration

Moving servers or jumping major PostgreSQL versions? Built-in logical replication enables real-time sync without downtime. Note it is PostgreSQL-to-PostgreSQL: migrations from MySQL or Oracle need external tooling (e.g. pgloader or ora2pg) for the schema and initial load.
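PostgreSQL's built-in logical replication is PG-to-PG; a minimal sketch with hypothetical names (the subscriber must be able to reach the publisher, and target tables must already exist):

```sql
-- On the source (publisher):
CREATE PUBLICATION app_pub FOR ALL TABLES;

-- On the target (subscriber):
CREATE SUBSCRIPTION app_sub
  CONNECTION 'host=old-db dbname=appdb user=repl password=secret'
  PUBLICATION app_pub;
```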

Conclusion

PostgreSQL in 2026 covers 80–90% of database needs for a typical Czech company. Before adding another specialized database to your stack, check — can PostgreSQL with an extension handle it?

One EXPLAIN ANALYZE will tell you more than the marketing page of any NoSQL database.


CORE SYSTEMS designs data architectures from PostgreSQL to distributed systems. Contact us for an audit of your database stack.

Tags: postgresql, database, pgvector, timescaledb, citus, backend, data-engineering
