diff --git a/apps/studio/components/interfaces/TableGridEditor/SidePanelEditor/ColumnEditor/ColumnType.tsx b/apps/studio/components/interfaces/TableGridEditor/SidePanelEditor/ColumnEditor/ColumnType.tsx
index 297dd0d443ccf..4206f44a60745 100644
--- a/apps/studio/components/interfaces/TableGridEditor/SidePanelEditor/ColumnEditor/ColumnType.tsx
+++ b/apps/studio/components/interfaces/TableGridEditor/SidePanelEditor/ColumnEditor/ColumnType.tsx
@@ -1,3 +1,4 @@
+import type { EnumeratedType } from 'data/enumerated-types/enumerated-types-query'
import { noop } from 'lodash'
import {
Calendar,
@@ -11,8 +12,6 @@ import {
} from 'lucide-react'
import Link from 'next/link'
import { ReactNode, useState } from 'react'
-
-import type { EnumeratedType } from 'data/enumerated-types/enumerated-types-query'
import {
AlertDescription_Shadcn_,
AlertTitle_Shadcn_,
@@ -36,6 +35,7 @@ import {
TooltipTrigger,
cn,
} from 'ui'
+
import {
POSTGRES_DATA_TYPES,
POSTGRES_DATA_TYPE_OPTIONS,
@@ -165,7 +165,7 @@ const ColumnType = ({
return (
{showLabel &&
Type}
-
+
-
-
-
- Type not found.
+
+
+ Type not found.
-
+
+
{POSTGRES_DATA_TYPE_OPTIONS.map((option: PostgresDataTypeOption) => (
>
)}
-
-
-
+
+
+
diff --git a/apps/www/_alternatives/supabase-vs-convex.mdx b/apps/www/_alternatives/supabase-vs-convex.mdx
new file mode 100644
index 0000000000000..c86b911429474
--- /dev/null
+++ b/apps/www/_alternatives/supabase-vs-convex.mdx
@@ -0,0 +1,559 @@
+---
+title: Supabase vs Convex
+description: Supabase is the Postgres development platform with a SQL-based database, Auth, and Cloud Functions
+author: prashant
+tags:
+ - comparison
+date: '2026-01-28'
+toc_depth: 3
+---
+
+Both Supabase and Convex help developers build full-featured backends without managing servers. Both offer real-time data synchronization, authentication, and serverless functions. Both target the same audience: developers who want to ship fast and focus on their product instead of infrastructure.
+
+The choice between them comes down to three questions:
+
+- How complex will your data relationships become?
+- What happens when you need to scale?
+- How much do you value open standards versus proprietary convenience?
+
+This guide breaks down each platform's architecture, features, and tradeoffs to help you make an informed decision. For additional perspective, [Senacor's enterprise review](https://senacor.blog/is-backend-as-a-service-baas-enterprise-ready-a-hands-on-review-of-convex-and-supabase/) compares both platforms through a hands-on development exercise.
+
+## What is Supabase?
+
+Supabase is an open source backend platform built on PostgreSQL. It provides a hosted Postgres database with automatic REST and GraphQL APIs, real-time subscriptions, authentication, edge functions, and file storage. Because the foundation is Postgres, you get the full power of SQL, including joins, transactions, views, stored procedures, and the entire extension ecosystem.
+
+Supabase runs on standard, portable technology. Your data lives in a real Postgres database that you can connect to with any SQL client, migrate to any Postgres host, or self-host entirely. The entire platform is open source under Apache 2.0.
+
+## What is Convex?
+
+Convex is a backend platform built on a custom transactional document database. It provides real-time data synchronization, serverless functions written in TypeScript, authentication integrations, and file storage. The database uses optimistic concurrency control (OCC) to handle transactions and automatically keeps connected clients in sync.
+
+Convex prioritizes developer experience for TypeScript applications. Types flow from your database schema through your queries and mutations to your React components without code generation. The real-time sync requires zero configuration.
+
+## Developer experience and productivity
+
+Convex built its reputation on developer experience: real-time sync that works without configuration, types that flow from schema to component without a build step, and a TypeScript-first design. When writing code by hand, these matter.
+
+But how developers write code has changed. Coding agents generate boilerplate instantly. Running `supabase gen types typescript` takes seconds. Wiring type generation into CI is a single conversation with an AI assistant.
+
+**The manual friction that Convex eliminates is exactly what AI handles effortlessly.**
+
+Convex optimized for a world where developers typed every line. In that world, skipping a CLI command or avoiding configuration files saved real time. Today, the velocity difference between "zero configuration" and "AI generates the configuration" is negligible.
+
+The architectural differences are not negligible. No coding agent can work around the 32,000 document scan limit, the 1-second query timeout, or the bandwidth amplification from resending full query results. These are hard constraints in Convex's design. Postgres does not have them.
+
+There is also a debugging consideration. AI can generate your backend code, but you still need to understand it when something breaks. SQL is a 50-year-old standard: Stack Overflow has answers, every database administrator knows it, and the knowledge transfers to every company, every database, and every analytics tool. Convex's query DSL is proprietary, with a smaller community and less documentation. A SQL constraint like `check (age >= 18)` is self-documenting. The equivalent Convex validation, spread across mutation handlers, requires tracing through multiple functions.
+
+If AI assistance levels the playing field on developer experience, the remaining differences are architectural. Postgres scales without artificial limits. It performs without bandwidth surprises. Your data stays portable. These are not features Supabase built; they are properties Postgres earned over decades of engineering.
+
+**Debugging and preparing for production are just as important to the developer experience.**
+
+## Core architecture and database
+
+The fundamental architectural difference shapes everything else in this comparison.
+
+| Feature | Supabase | Convex |
+| --------------------- | ---------------------------------- | ------------------------------ |
+| Database engine | PostgreSQL 15+ | Custom document store (Rust) |
+| Data model | Relational (tables, rows, columns) | Document (nested objects) |
+| Query language | SQL | TypeScript functions |
+| Schema | Explicit, enforced | Schema-validated documents |
+| Concurrency model | MVCC with pessimistic locking | Optimistic concurrency control |
+| Transaction isolation | Serializable available | Serializable by default |
+
+**PostgreSQL gives Supabase decades of battle-tested reliability.** You can model complex data relationships naturally with foreign keys and joins. You can move complexity into the database using views, database functions, and constraints, instead of re-implementing the same logic across your application.
+
+**Convex's document model simplifies common patterns.** Nested data structures map directly to TypeScript types. You do not need to think about joins for simple relationships since you can embed related data directly. The tight TypeScript integration means your queries are type-checked at compile time.
+
+The tradeoff becomes clear as applications grow. Relational data modeling handles increasing complexity gracefully because SQL was designed for it. Document databases require careful denormalization and often push join logic to application code.
+
+## Real-time data synchronization
+
+Both platforms offer real-time capabilities, but with different implementation approaches.
+
+| Feature | Supabase | Convex |
+| ---------------------- | ------------------------------------ | ---------------------------- |
+| Real-time protocol | Postgres LISTEN/NOTIFY via WebSocket | Custom subscription protocol |
+| Default behavior | Opt-in per table | Enabled by default |
+| Granularity | Row-level changes | Query result changes |
+| Configuration required | Requires enabling Realtime on tables | None |
+| Bandwidth model | Sends only changed rows | Sends full query results |
+
+**Supabase requires explicit configuration but offers precise control.** You enable Realtime on specific tables and subscribe to changes in your client code. Row-level security policies filter which changes each client receives. The subscription model sends only the rows that changed, not entire result sets. If you update one row in a table with 10,000 rows, subscribers receive that one row.
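+
+As a sketch (assuming a `tasks` table with Realtime enabled), subscribing with `supabase-js` looks like this:
+
+```tsx
+// supabase-js v2: subscribe to row-level changes on the tasks table.
+// Only the changed row arrives in the payload, not the full result set.
+const channel = supabase
+  .channel('tasks-changes')
+  .on(
+    'postgres_changes',
+    { event: 'UPDATE', schema: 'public', table: 'tasks' },
+    (payload) => console.log('Updated row:', payload.new)
+  )
+  .subscribe()
+```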
+
+**Convex's real-time sync requires zero setup.** Every query automatically subscribes to updates. When underlying data changes, connected clients receive updates instantly. However, subscriptions resend complete query results when any document in the result set changes. A single field update invalidates the entire query, even when the updated field was not part of the query result.
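+
+A minimal sketch of the Convex side (assuming a `listTasks` query defined in `convex/tasks.ts`):
+
+```tsx
+// Convex React client: useQuery subscribes automatically.
+// When any document in the result set changes, the full result is resent.
+import { useQuery } from 'convex/react'
+import { api } from '../convex/_generated/api'
+
+function TaskList() {
+  const tasks = useQuery(api.tasks.listTasks)
+  return <ul>{tasks?.map((t) => <li key={t._id}>{t.title}</li>)}</ul>
+}
+```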
+
+A [GitHub issue](https://github.com/get-convex/convex-backend/issues/95) documents real-world impact from a developer migrating to Convex:
+
+- Database size: 5 MB
+- Monthly read volume: 900 MB (180x the data size)
+- Projected bandwidth at 500 users: 600+ GB/month
+- Equivalent Supabase workload: approximately 3 GB/month
+
+The developer reported that "any update to a single element triggers a full re-send of the entire list, instead of just the updated row" and that paginated queries "never seem to use the cache." Convex's caching invalidates entire queries when any field in any document changes, even fields not included in the query result.
+
+The bandwidth difference matters at scale. Supabase's row-level change tracking keeps costs predictable; Convex's full-result retransmission can produce unexpected bandwidth bills.
+
+## Scaling characteristics and limits
+
+Understanding each platform's architectural limits helps you anticipate where you might hit ceilings.
+
+| Constraint | Supabase | Convex |
+| --------------------------------- | ------------------------------- | -------------------------------- |
+| Documents/rows per query | Limited by available memory | 32,000 documents per transaction |
+| Data per transaction | Limited by available memory | 16 MiB read or written |
+| Query/mutation timeout | Configurable (minutes to hours) | 1 second hard limit |
+| Action timeout | 150 seconds (Edge Functions) | 10 minutes |
+| Concurrent queries | Configurable connection pooling | 16 (Free) / 256 (Pro) |
+| Full-text search results | PostgreSQL FTS (configurable) | 1,024 results maximum |
+| Vector search results | pgvector (configurable) | 256 results maximum |
+| Documents written per transaction | Limited by transaction size | 16,000 documents |
+
+**PostgreSQL handles billions of rows.** You can add read replicas, partition tables, and scale vertically or horizontally. Connection pooling via PgBouncer or Supavisor handles thousands of concurrent connections. Analytical queries, aggregations, and complex joins work without artificial ceilings.
+
+**Convex's limits exist because of its architecture.** Optimistic concurrency control becomes expensive when transactions must validate against all concurrent transactions. The 32,000 document scan limit prevents analytical queries over large datasets. The 1-second timeout prevents long-running computations. These are not configuration options; they are hard constraints.
+
+For applications that stay within these limits, Convex performs well. For applications that need analytical queries, complex aggregations, or batch operations, the limits become blocking constraints.
+
+## Concurrency and consistency
+
+How each platform handles concurrent writes affects application behavior under load.
+
+| Aspect | Supabase | Convex |
+| -------------------------- | ------------------------------------------ | ------------------------------- |
+| Locking strategy | Pessimistic (lock rows during transaction) | Optimistic (validate at commit) |
+| Conflict handling | Waits for lock release | Retries transaction |
+| High-contention behavior | Predictable latency, ordered writes | Retry storms possible |
+| Counter/increment patterns | Native support via FOR UPDATE | Documented conflict risk |
+
+**PostgreSQL's pessimistic locking handles contention gracefully.** When multiple transactions target the same rows, they queue and execute in order. Latency increases predictably. The database handles the coordination.
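+
+For example, the classic counter increment is safe under contention in Postgres (a sketch assuming a hypothetical `counters` table):
+
+```sql
+-- Concurrent callers queue on the row lock and apply their increments in order.
+update counters set value = value + 1 where id = 'page_views';
+
+-- When the read and write must be separate steps, lock the row explicitly:
+begin;
+select value from counters where id = 'page_views' for update;
+update counters set value = value + 1 where id = 'page_views';
+commit;
+```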
+
+**Convex's OCC works well in low-contention scenarios.** Transactions execute without locks, then validate at commit time. If another transaction modified the same data, the transaction retries. Convex's own documentation warns about this pattern for counters and high-contention updates.
+
+For most applications, this distinction does not matter. For applications with hot spots (popular items, shared counters, real-time collaboration on the same document), pessimistic locking provides more predictable behavior.
+
+## TypeScript integration
+
+Both platforms support TypeScript, but the depth of integration differs.
+
+| Feature | Supabase | Convex |
+| ---------------------- | ----------------------------------- | -------------------------- |
+| Type generation | CLI generates from database schema | Native, no generation step |
+| Query type safety | Via generated types | Compile-time checked |
+| Mutation type safety | Via generated types | Compile-time checked |
+| Schema source of truth | Database | TypeScript files |
+| API layer | PostgREST (auto-generated REST API) | Custom RPC protocol |
+
+**Supabase generates types from your database schema.** You run `supabase gen types typescript` to generate TypeScript definitions. The supabase-js client uses [PostgREST](https://postgrest.org), which automatically generates a REST API from your Postgres schema. The generated types provide autocomplete and type checking for all your queries. You regenerate types after schema changes, but this integrates into standard CI/CD workflows.
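+
+A sketch of that workflow (project ref, keys, and the `tasks` table are placeholders):
+
+```tsx
+// 1. Generate types from the live schema:
+//    supabase gen types typescript --project-id <project-ref> > database.types.ts
+
+// 2. Pass the generated Database type to the client:
+import { createClient } from '@supabase/supabase-js'
+import type { Database } from './database.types'
+
+const supabase = createClient<Database>(
+  process.env.SUPABASE_URL!,
+  process.env.SUPABASE_ANON_KEY!
+)
+
+// Queries are now typed: `data` is inferred from the tasks table definition.
+const { data } = await supabase.from('tasks').select('id, title, status')
+```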
+
+**Convex's TypeScript integration is tightly coupled.** You define your schema in TypeScript. Queries and mutations are TypeScript functions. Types flow from schema to function to component without any code generation step. Invalid queries fail at compile time, not runtime.
+
+For teams using multiple languages, working with BI tools, or preferring database-first design, Supabase's approach keeps the schema in the database where SQL tools expect it. For TypeScript-only teams who want zero-configuration type flow, Convex reduces friction during development.
+
+## Code verbosity
+
+The following examples illustrate the complexity difference for common operations. Simpler code is easier to understand, debug, and maintain, regardless of whether a human or AI wrote it.
+
+### Counting records
+
+**Supabase**
+
+```tsx
+const { count } = await supabase.from('tasks').select('*', { count: 'exact', head: true })
+```
+
+**Convex**
+
+Convex has no built-in count function. The recommended approach requires installing and configuring the `@convex-dev/aggregate` component.
+
+First, configure the component in `convex/convex.config.ts`:
+
+```tsx
+import { defineApp } from 'convex/server'
+import aggregate from '@convex-dev/aggregate/convex.config'
+
+const app = defineApp()
+app.use(aggregate, { name: 'aggregateTasks' })
+
+export default app
+```
+
+Then create the aggregate in your queries file:
+
+```tsx
+import { components } from './_generated/api'
+import { DataModel } from './_generated/dataModel'
+import { TableAggregate } from '@convex-dev/aggregate'
+import { query } from './_generated/server'
+
+const aggregateTasks = new TableAggregate<{
+ Key: number
+ DataModel: DataModel
+ TableName: 'tasks'
+}>(components.aggregateTasks, {
+ sortKey: (doc) => doc._creationTime,
+})
+
+export const countTasks = query({
+ args: {},
+ handler: async (ctx) => {
+ return await aggregateTasks.count(ctx, {})
+ },
+})
+```
+
+Without the component, you must fetch all documents and count in JavaScript, which scans every document and hits Convex's 8,192 document read limit on larger tables.
+
+### Field constraints
+
+**Supabase SQL**
+
+```sql
+create table users (
+ id uuid primary key default gen_random_uuid(),
+ username text unique not null
+ constraint proper_username check (username ~* '^[a-zA-Z0-9_]+$')
+ constraint username_length check (char_length(username) > 3 and char_length(username) < 15),
+ age int check (age >= 18)
+);
+```
+
+**Convex**
+
+Convex schemas validate types only, not values. You must write validation logic in every mutation:
+
+```tsx
+// convex/schema.ts - only type validation
+import { defineSchema, defineTable } from 'convex/server'
+import { v } from 'convex/values'
+
+export default defineSchema({
+ users: defineTable({
+ username: v.string(),
+ age: v.number(),
+  }).index('by_username', ['username']),
+})
+
+// convex/users.ts - manual validation in mutation
+import { mutation } from './_generated/server'
+import { v } from 'convex/values'
+
+export const createUser = mutation({
+ args: { username: v.string(), age: v.number() },
+ handler: async (ctx, args) => {
+ const usernameRegex = /^[a-zA-Z0-9_]+$/
+ if (!usernameRegex.test(args.username)) {
+ throw new Error('Username must contain only letters, numbers, and underscores')
+ }
+ if (args.username.length <= 3 || args.username.length >= 15) {
+ throw new Error('Username must be between 4 and 14 characters')
+ }
+ if (args.age < 18) {
+ throw new Error('Must be 18 or older')
+ }
+ const existing = await ctx.db
+ .query('users')
+ .withIndex('by_username', (q) => q.eq('username', args.username))
+ .first()
+ if (existing) {
+ throw new Error('Username already taken')
+ }
+ return await ctx.db.insert('users', args)
+ },
+})
+```
+
+This validation must be repeated in every mutation that writes to the table. Postgres constraints are enforced at the database level regardless of how data enters.
+
+### Filtered queries
+
+**Supabase**
+
+```tsx
+const { data: completedTasks } = await supabase
+ .from('tasks')
+ .select('*')
+ .eq('status', 'completed')
+ .eq('user_id', userId)
+ .order('created_at', { ascending: false })
+ .limit(10)
+```
+
+**Convex**
+
+First, define an index in your schema:
+
+```tsx
+// convex/schema.ts
+import { defineSchema, defineTable } from 'convex/server'
+import { v } from 'convex/values'
+
+export default defineSchema({
+ tasks: defineTable({
+ title: v.string(),
+ status: v.string(),
+ user_id: v.id('users'),
+ })
+ .index('by_status', ['status'])
+ .index('by_user_and_status', ['user_id', 'status']),
+})
+```
+
+Then query using the index:
+
+```tsx
+import { query } from './_generated/server'
+import { v } from 'convex/values'
+
+export const getCompletedTasks = query({
+ args: { userId: v.id('users') },
+ handler: async (ctx, args) => {
+ const tasks = await ctx.db
+ .query('tasks')
+ .withIndex('by_user_and_status', (q) =>
+ q.eq('user_id', args.userId).eq('status', 'completed')
+ )
+ .order('desc')
+ .take(10)
+ return tasks
+ },
+})
+```
+
+Filtering without an index causes full table scans, which count against Convex's document read limits and can cause unexpected billing.
+
+### The power of views
+
+Consider a common pattern: displaying a user's dashboard with their organization info, team members, and recent activity. In a relational database, you define this once as a view.
+
+**Supabase SQL (define once, query simply)**
+
+```sql
+create view user_dashboard as
+select
+ u.id as user_id,
+ u.name as user_name,
+ o.name as org_name,
+ o.plan as org_plan,
+ count(distinct t.id) as team_member_count,
+ array_agg(distinct t.name) as team_members,
+ count(distinct a.id) as recent_activity_count
+from users u
+join organizations o on u.org_id = o.id
+left join users t on t.org_id = o.id
+left join activity a on a.user_id = u.id
+ and a.created_at > now() - interval '7 days'
+group by u.id, u.name, o.name, o.plan;
+```
+
+Then query it:
+
+```tsx
+const { data } = await supabase.from('user_dashboard').select('*').eq('user_id', userId).single()
+```
+
+**Convex (multiple dependent queries, client-side assembly)**
+
+Convex has no views. You must fetch each piece separately and assemble on the client:
+
+```tsx
+import { query } from './_generated/server'
+import { v } from 'convex/values'
+
+// Query 1: Get the user
+export const getUser = query({
+ args: { userId: v.id('users') },
+ handler: async (ctx, args) => {
+ return await ctx.db.get(args.userId)
+ },
+})
+
+// Query 2: Get the organization (depends on user.org_id)
+export const getOrganization = query({
+ args: { orgId: v.id('organizations') },
+ handler: async (ctx, args) => {
+ return await ctx.db.get(args.orgId)
+ },
+})
+
+// Query 3: Get team members (depends on org_id)
+export const getTeamMembers = query({
+ args: { orgId: v.id('organizations') },
+ handler: async (ctx, args) => {
+ return await ctx.db
+ .query('users')
+ .withIndex('by_org', (q) => q.eq('org_id', args.orgId))
+ .collect()
+ },
+})
+
+// Query 4: Get recent activity (depends on user_id)
+export const getRecentActivity = query({
+ args: { userId: v.id('users') },
+ handler: async (ctx, args) => {
+ const weekAgo = Date.now() - 7 * 24 * 60 * 60 * 1000
+ return await ctx.db
+ .query('activity')
+ .withIndex('by_user', (q) => q.eq('user_id', args.userId))
+ .filter((q) => q.gte(q.field('created_at'), weekAgo))
+ .collect()
+ },
+})
+```
+
+In your React component, you call these sequentially and combine the results:
+
+```tsx
+const user = useQuery(api.users.getUser, { userId })
+const org = useQuery(api.orgs.getOrganization, user ? { orgId: user.org_id } : 'skip')
+const team = useQuery(api.users.getTeamMembers, org ? { orgId: org._id } : 'skip')
+const activity = useQuery(api.activity.getRecentActivity, { userId })
+```
+
+Four round trips instead of one. Each query counts against bandwidth. The client assembles what the database should have joined.
+
+## Serverless functions
+
+Both platforms provide serverless compute, with different execution models.
+
+| Feature | Supabase | Convex |
+| --------------- | ---------------------------------------- | ------------------------ |
+| Runtime | Deno (V8 isolates) | V8 isolates |
+| Language | TypeScript/JavaScript | TypeScript |
+| Cold start | ~200-500ms typical | Fast (sub-100ms typical) |
+| Execution limit | 150 seconds (wall clock) | Variable based on plan |
+| Database access | Via Supabase client or direct connection | Native, transactional |
+
+**Supabase Edge Functions run at the edge, separate from the database.** They connect to Postgres like any other client. This separation means you can call external APIs, run long computations, or handle webhooks without blocking database operations. You manage transactions explicitly when needed.
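+
+A minimal sketch of an Edge Function that reads from Postgres and then calls an external API (the `tasks` table and the external URL are placeholders; the environment variable names follow Supabase's defaults):
+
+```tsx
+import { createClient } from 'npm:@supabase/supabase-js@2'
+
+Deno.serve(async (req) => {
+  const supabase = createClient(
+    Deno.env.get('SUPABASE_URL')!,
+    Deno.env.get('SUPABASE_ANON_KEY')!
+  )
+
+  // Read from Postgres like any other client...
+  const { data: tasks } = await supabase.from('tasks').select('id, title').limit(5)
+
+  // ...then do work unrelated to the database without blocking it.
+  const res = await fetch('https://api.example.com/notify', {
+    method: 'POST',
+    body: JSON.stringify({ tasks }),
+  })
+
+  return new Response(JSON.stringify({ ok: res.ok }), {
+    headers: { 'Content-Type': 'application/json' },
+  })
+})
+```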
+
+**Convex functions run inside the database transaction boundary.** Queries and mutations are atomic by default. You do not think about connection management or transaction handling. The tight integration enables the real-time subscription system.
+
+Supabase's separation enables more flexibility for non-database workloads. Convex's integration enables simpler code for pure database operations but limits what you can do within a function.
+
+## Authentication
+
+Both platforms provide authentication with different architectural approaches.
+
+| Feature | Supabase | Convex |
+| -------------------------- | ---------------------------------------------------- | -------------------------------------- |
+| Built-in auth | Yes, full auth system | Integrations only (Clerk, Auth0, etc.) |
+| Third-party auth providers | Supported (Clerk, Auth0, Firebase Auth, AWS Cognito) | Required |
+| Social providers | 20+ built-in | Via third-party integration |
+| Enterprise SSO | SAML 2.0 built-in | Via third-party integration |
+| Email/password | Built-in | Via integration |
+| Magic links | Built-in | Via integration |
+| Phone/SMS | Built-in (Twilio, MessageBird, Vonage) | Via integration |
+| Anonymous auth | Built-in | Via integration |
+| Row-level security | Native database integration | Manual checks in function code |
+
+**Supabase Auth works out of the box.** Built-in social providers include Apple, Azure, Bitbucket, Discord, Facebook, Figma, GitHub, GitLab, Google, Kakao, Keycloak, LinkedIn, Notion, Slack, Spotify, Twitch, Twitter, WorkOS, and Zoom. Enterprise customers get SAML 2.0 single sign-on. Phone authentication works with Twilio, MessageBird, or Vonage. Anonymous sign-ins let users try your app before creating an account. For React and TypeScript developers, [supabase.com/ui](https://supabase.com/ui) provides one-command auth component installation.
+
+Supabase also supports third-party auth providers like Clerk, Auth0, Firebase Auth, and AWS Cognito if you prefer them. The difference is that Supabase gives you a choice; Convex requires a third-party provider.
+
+User sessions integrate directly with Row Level Security, so database policies can reference the authenticated user without additional code. One import, one configuration, and auth works.
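+
+A sketch of what that integration looks like at the database level (assuming a `tasks` table with a `user_id` column):
+
+```sql
+alter table tasks enable row level security;
+
+-- auth.uid() resolves to the authenticated user making the request,
+-- so each user can only read their own rows.
+create policy "Users can read own tasks"
+  on tasks for select
+  using (auth.uid() = user_id);
+```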
+
+**Convex delegates authentication to third-party providers.** The recommended setup with Clerk requires multiple integration steps and adds cost (at 100,000 MAUs, Clerk runs approximately $1,825/month while Supabase Auth is included in Pro):
+
+1. Configure Clerk and create a JWT template for Convex
+2. Create a server-side auth config file in your Convex folder
+3. Wrap your app in a custom client component (required for Next.js App Router because ConvexProviderWithClerk cannot run in Server Components)
+4. Use Convex's auth hooks instead of Clerk's hooks (useConvexAuth instead of useAuth)
+5. Use Convex's auth components instead of Clerk's components (Authenticated instead of SignedIn)
+6. Set up a webhook endpoint for user synchronization
+7. Implement authorization checks in every function that needs them
+
+[Clerk's documentation](https://clerk.com/docs/guides/development/integrations/databases/convex) notes that "with Next.js App Router, things are a bit more complex" because the provider must run in a Client Component while layout.tsx is a Server Component. You must create wrapper components to bridge this gap.
+
+For teams with existing auth infrastructure, Convex's approach provides flexibility. For teams starting fresh, Supabase's built-in auth reduces dependencies and setup time.
+
+## Extension ecosystem
+
+Postgres has a large extension ecosystem. Supabase supports over 50 extensions, though not all Postgres extensions are available.
+
+| Capability | Supabase | Convex |
+| --------------------- | --------------------------------------------------------------- | ----------------------------------------- |
+| Geospatial queries | PostGIS, pgRouting | Not available |
+| Vector/AI embeddings | pgvector | Built-in vector search (256 result limit) |
+| Time-series data | Not available (TimescaleDB deprecated) | Not available |
+| Full-text search | PostgreSQL FTS, pgroonga | Built-in search (1,024 result limit) |
+| Graph queries | Not available | Not available |
+| Foreign data wrappers | Wrappers extension (Stripe, Firebase, S3, BigQuery, and others) | Not applicable |
+| Scheduled jobs | pg_cron | Built-in scheduling |
+
+**Supabase provides access to 50+ Postgres extensions.** PostGIS handles geospatial workloads. pgvector powers AI embeddings and similarity search. pg_cron schedules recurring jobs. The Wrappers extension connects to external data sources like Stripe, Firebase, S3, and BigQuery. Some extensions like TimescaleDB have been deprecated on newer Postgres versions.
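+
+For instance, scheduling a recurring job with pg_cron is a single SQL statement (the table and retention window below are placeholders):
+
+```sql
+-- Purge soft-deleted rows every night at 03:00 UTC.
+select cron.schedule(
+  'nightly-cleanup',
+  '0 3 * * *',
+  $$ delete from tasks where deleted_at < now() - interval '30 days' $$
+);
+```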
+
+**Convex includes search and vector capabilities** but cannot extend beyond its core feature set. The built-in features work within documented limits (256 results for vector search, 1,024 for text search). If you need functionality Convex does not provide, you must export data to external systems.
+
+For geospatial workloads or AI applications with large result sets, Supabase's extension ecosystem provides more flexibility. For standard search and vector use cases within Convex's limits, the built-in features may be sufficient.
+
+## Pricing and cost predictability
+
+Understanding pricing models helps avoid surprises at scale.
+
+| Aspect | Supabase | Convex |
+| --------------- | -------------------------------------------- | ------------------------------------------------ |
+| Free tier | 500 MB database, 1 GB storage | 1M function calls/month |
+| Pricing model | Resource-based (compute, storage, bandwidth) | Usage-based (function calls, bandwidth, storage) |
+| Predictability | Fixed costs for provisioned resources | Variable based on usage patterns |
+| Bandwidth costs | Included in plan tiers | $0.20 per GB |
+
+**Supabase pricing is primarily resource-based.** You pay for database compute (instance size), storage, and bandwidth at defined rates. Costs are predictable because they scale with provisioned resources, not request patterns.
+
+**Convex pricing is usage-based.** You pay per function call, per GB of bandwidth, and per GB of storage. This works well for low-traffic applications but can produce surprising bills when traffic spikes or when the bandwidth amplification issue applies.
+
+The bandwidth amplification issue documented in Convex's GitHub (issue #95) shows real-world impact: a developer projected 600 GB/month bandwidth costs at 500 users due to full query result retransmission. The equivalent workload on Supabase would consume roughly 3 GB.
+
+## Open source and data portability
+
+The philosophical difference matters for long-term planning.
+
+| Aspect | Supabase | Convex |
+| ----------------------- | -------------------------------- | ---------------------------------- |
+| License | Apache 2.0 (open source) | FSL Apache 2.0 (source-available) |
+| Self-hosting | Full platform self-hostable | Available (backend only) |
+| Data export | Standard pg_dump, any SQL client | API-based export |
+| Vendor lock-in | Low (portable Postgres) | Moderate (proprietary data format) |
+| Community contributions | Open to PRs | GitHub issues and PRs accepted |
+
+**Supabase is fully open source under Apache 2.0.** You can self-host the entire platform, inspect the code, and contribute improvements. Your data lives in standard Postgres, exportable with any SQL tool and importable to any Postgres host.
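+
+Export and restore follow the standard Postgres workflow (connection strings below are placeholders):
+
+```
+# Dump the full database with stock Postgres tooling
+pg_dump "postgresql://postgres:[password]@db.[project-ref].supabase.co:5432/postgres" > backup.sql
+
+# Restore it anywhere Postgres runs
+psql "postgresql://user:password@any-postgres-host:5432/postgres" < backup.sql
+```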
+
+**Convex open-sourced its backend in 2024 under the Functional Source License (FSL).** The FSL is a source-available license that converts to Apache 2.0 after two years, rather than a permissive open source license from day one. Self-hosting is available, and Convex states that the self-hosted version runs the same code as the cloud service. However, data remains in Convex's proprietary document format, so migration to another system still requires application changes.
+
+The lock-in difference goes beyond data format. Every query you write for Convex uses their proprietary DSL. That code works nowhere else. With Supabase, `supabase-js` is convenient but optional. You can use Prisma, Drizzle, raw SQL, or any Postgres client. Your query knowledge and code remain portable.
+
+For hobbyist projects, vendor lock-in may not matter. For businesses building core products, Supabase's standard Postgres format provides more straightforward migration paths. Convex has published [their own assessment](https://stack.convex.dev/how-hard-is-it-to-migrate-away-from-convex) of migration difficulty, noting that while "API-level lock-in isn't as bad as it might seem," you would be "trading simplicity for complexity by taking on additional infrastructure management."
+
+## Third-party tool compatibility
+
+Existing tooling compatibility affects development workflows.
+
+| Tool category | Supabase | Convex |
+| ------------------- | ---------------------------------------- | -------------------- |
+| SQL clients | All (DataGrip, TablePlus, psql, etc.) | None |
+| BI tools | All SQL-compatible tools | Requires data export |
+| ETL/ELT | Fivetran, Airbyte, dbt, etc. | Limited integration |
+| ORM support | Prisma, Drizzle, TypeORM, etc. | Convex client only |
+| Database migrations | Standard tools (Flyway, Liquibase, etc.) | Convex CLI only |
+
+**Supabase works with the entire SQL ecosystem.** Connect any SQL client for queries and administration. Use any BI tool for analytics. Integrate with any ETL pipeline. Choose your preferred ORM or query builder.
+
+**Convex requires its own tooling.** You cannot connect a SQL client because there is no SQL. Analytics tools cannot query directly. ORMs do not apply. This simplifies some decisions but limits flexibility.
+
+## When to choose Supabase
+
+Choose Supabase when:
+
+- You're building something that might grow into a production system
+- Your data has complex relationships that benefit from SQL joins and foreign keys
+- You need analytical queries, aggregations, or reporting over large datasets
+- You want to use the Postgres extension ecosystem (PostGIS, pgvector)
+- You prefer open source with self-hosting options
+- You need predictable pricing without bandwidth amplification concerns
+- You want to use existing SQL tools, BI platforms, and ETL pipelines
+- You want code that's easy to debug, whether you or an AI wrote it
+- You value a query language with 50 years of documentation and universal tooling
+
+## When to choose Convex
+
+Choose Convex when:
+
+- You're validating an idea and don't expect this specific codebase to become your production system
+- Your application is primarily CRUD with straightforward data relationships
+- Your data volume will stay within documented limits (32K documents per query, 1-second timeouts)
+- You accept that if the project succeeds, you may need to migrate to a different platform
+- You want to avoid learning SQL and Postgres concepts
+
+## Conclusion
+
+Convex offers a streamlined developer experience for TypeScript applications with simple data requirements. The real-time sync works without configuration, and the type safety catches errors at compile time rather than runtime. For prototypes and applications that fit within its architectural constraints, Convex gets you moving fast.
+
+Supabase offers the full power of PostgreSQL with a modern developer experience. You get real-time subscriptions, serverless functions, and authentication while keeping SQL's flexibility, the extension ecosystem, and data portability. Applications that start simple on Supabase can grow into complex production systems without re-platforming.
+
+Both platforms let you build fast. Supabase lets you keep building on the same foundation as your application grows.
diff --git a/apps/www/_blog/2026-01-15-read-replicas-vs-bigger-compute.mdx b/apps/www/_blog/2026-01-15-read-replicas-vs-bigger-compute.mdx
new file mode 100644
index 0000000000000..47e22c223235e
--- /dev/null
+++ b/apps/www/_blog/2026-01-15-read-replicas-vs-bigger-compute.mdx
@@ -0,0 +1,343 @@
+---
+title: 'When to use Read Replicas vs. bigger compute'
+description: 'A practical guide to diagnosing database slowdowns and choosing between vertical scaling and Read Replicas based on your workload, budget, and performance bottlenecks.'
+author: prashant
+date: '2026-01-15'
+categories:
+ - product
+tags:
+ - read-replicas
+ - database
+ - scaling
+ - performance
+ - postgres
+ - compute
+ - infrastructure
+toc_depth: 3
+imgSocial: 'https://zhfonblqamxferhoguzj.supabase.co/functions/v1/generate-og?template=announcement&layout=text-only&copy=When+to+use%0A%5BRead+Replicas%5D%0Avs.+bigger+compute'
+imgThumb: 'https://zhfonblqamxferhoguzj.supabase.co/functions/v1/generate-og?template=announcement&layout=text-only&copy=When+to+use%0A%5BRead+Replicas%5D%0Avs.+bigger+compute'
+---
+
+When your database starts slowing down, you face a choice: make your existing database bigger, or spread the load across multiple databases. Both approaches work. Neither is universally correct. The right answer depends on your workload, your budget, and where the bottleneck actually is.
+
+This post walks through how to diagnose what is causing your database to slow down, when vertical scaling (bigger compute) makes sense, when horizontal read scaling (Read Replicas) is the better path, and how to make the decision with real numbers.
+
+## The scaling decision every growing database faces
+
+Your Supabase project starts on a Small compute instance. It handles your MVP, your beta users, and your first paying customers. Then, traffic grows and response times creep up. You need to scale.
+
+Here is the quick version:
+
+| If this is you... | Do this |
+| ------------------------------------------------------------------- | ------------------------------------------ |
+| Database is slow and CPU (user processes) is consistently above 70% | Upgrade compute |
+| Analytics queries are hurting production | Add a Read Replica |
+| Users in Europe or Asia have high latency | Add a Read Replica in their region |
+| I am maxed out at 16XL and need more read capacity | Add Read Replicas |
+| My workload is mostly writes | Upgrade compute (replicas only help reads) |
+| I want the simplest solution with no code changes | Upgrade compute |
+
+The rest of this post explains how to diagnose your specific situation and make the right call with real numbers.
+
+## First, diagnose the actual problem
+
+Before choosing a scaling strategy, figure out what is actually causing the slowdown. Throwing hardware at the wrong problem wastes money. The [database performance guide](https://supabase.com/docs/guides/database/debugging-performance) covers diagnostics in depth. Here are the essentials.
+
+### Check your query patterns
+
+Run this query to see what is consuming the most time in your database:
+
+```sql
+select
+ calls,
+ mean_exec_time::numeric(10,2) as avg_ms,
+ total_exec_time::numeric(10,2) as total_ms,
+ query
+from pg_stat_statements
+order by total_exec_time desc
+limit 20;
+```
+
+This tells you where time is actually going. You might discover:
+
+- A few slow queries dominating execution time (optimize those queries first)
+- Many fast queries adding up (you need more capacity)
+- Analytics queries competing with production traffic (you need workload isolation)
+
+### Check your read/write ratio
+
+Read Replicas only help with read traffic. If your workload is write-heavy, replicas will not help. Check your ratio:
+
+```sql
+select
+ sum(seq_tup_read + idx_tup_fetch) as reads,
+ sum(n_tup_ins + n_tup_upd + n_tup_del) as writes,
+ round(
+ 100.0 * sum(seq_tup_read + idx_tup_fetch) / nullif(
+ sum(seq_tup_read + idx_tup_fetch + n_tup_ins + n_tup_upd + n_tup_del),
+ 0
+ ),
+ 1
+ ) as read_percentage
+from pg_stat_user_tables;
+```
+
+If reads are 80% or more of your traffic, Read Replicas can distribute that load. If writes dominate, you need bigger compute (or query optimization, or Supabase Queues for background processing).
+
+### Check CPU and memory utilization
+
+In the Supabase Dashboard, go to **Reports > Database**. Look at:
+
+- **CPU utilization:** Sustained above 70% means you are running hot
+- **Connection count:** Approaching limits causes connection errors
+
+These metrics tell you whether you are hitting hardware limits or software limits.
+
+## When bigger compute is the right choice
+
+Vertical scaling is the simpler path. One click in the dashboard, a brief restart, and your database has more resources. Choose bigger compute when:
+
+### Your workload is write-heavy
+
+Read Replicas cannot help with writes. All INSERT, UPDATE, and DELETE operations go to the primary database. If writes are your bottleneck, you need a bigger primary.
+
+### You have headroom in the compute tiers
+
+Supabase offers [compute tiers](https://supabase.com/docs/guides/platform/compute-and-disk) from Micro ($10/month) to 16XL ($3,730/month). If you are on Medium and experiencing slowdowns, upgrading to Large or XL is straightforward and immediate.
+
+| Current tier | Next tier | Monthly cost increase | What you get |
+| ------------ | ------------ | --------------------- | --------------------- |
+| Small ($15) | Medium ($60) | +$45 | 2x RAM (2GB to 4GB) |
+| Medium ($60) | Large ($110) | +$50 | 2x RAM, dedicated CPU |
+| Large ($110) | XL ($210) | +$100 | 2x CPU cores, 2x RAM |
+| XL ($210) | 2XL ($410) | +$200 | 2x CPU cores, 2x RAM |
+
+### Your queries are already optimized
+
+Before scaling hardware, check that your queries use indexes effectively. An index is like the index at the back of a book. Instead of reading every page to find a topic, you look it up in the index and jump straight to the right page. Postgres works the same way. Without an index, Postgres reads every row in a table to find matching data. This is called a sequential scan. With an index, Postgres looks up which rows match and jumps directly to them. The difference can be dramatic: a query that takes 30 seconds without an index might take 30 milliseconds with one.
+
+Run `EXPLAIN ANALYZE` on slow queries to see if Postgres is using indexes or doing sequential scans. If you see "Seq Scan" on a large table, you probably need an index. The [Database Advisor](https://supabase.com/docs/guides/database/database-advisors) in your Supabase Dashboard can also identify missing indexes and other performance issues automatically.
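+
+A sketch of that workflow (the `orders` table and its columns are placeholders):
+
+```sql
+-- Check whether Postgres is using an index for a slow query
+explain analyze
+select * from orders where customer_id = 'abc-123' order by created_at desc limit 20;
+
+-- If the plan shows "Seq Scan on orders", add an index that matches the filter and sort:
+create index concurrently idx_orders_customer_created
+  on orders (customer_id, created_at desc);
+```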
+
+**Quick index guidelines:**
+
+- Add indexes on columns used in WHERE clauses
+- Add indexes on columns used in JOIN conditions
+- Add indexes on columns used in ORDER BY
+- Compound indexes work for queries that filter on multiple columns
+- Do not add indexes on every column. Each index slows down writes and uses disk space.
+
+Sometimes a $0 index beats a $200/month compute upgrade.
+
+**Analyze your queries with Claude Code:**
+
+If you use [Claude Code](https://claude.ai/claude-code) with the [Supabase MCP Server](https://github.com/supabase-community/supabase-mcp), you can ask Claude to analyze your database and suggest indexes:
+
+```
+Analyze my Supabase database for missing indexes:
+
+1. Query pg_stat_statements to find the 20 slowest queries by total execution time
+2. For each slow query, run EXPLAIN ANALYZE and check for sequential scans on tables with more than 10,000 rows
+3. Suggest CREATE INDEX statements for any missing indexes
+4. Estimate the performance improvement for each suggested index
+
+Show me the slow queries, what is causing them to be slow, and the exact CREATE INDEX statements I should run.
+```
+
+Claude will connect to your database, run the diagnostics, and give you specific index recommendations.
+
+### You need the simplest solution
+
+Vertical scaling requires no code changes. No connection string updates. No routing logic. If you need immediate relief and simplicity matters, upgrade compute first.
+
+## When Read Replicas are the right choice
+
+[Read Replicas](https://supabase.com/blog/introducing-read-replicas) add complexity but unlock capabilities that vertical scaling cannot provide. Choose Read Replicas when:
+
+### Analytics queries are hurting production
+
+This is the most common reason teams adopt Read Replicas. The pattern is familiar: your data team connects Metabase or Looker to your production database. They write a query that joins three tables and scans six months of orders. The query runs for 45 seconds. During those 45 seconds, your production database is working hard on that analytical query instead of serving your application. API response times spike and your users notice.
+
+**Moving analytical queries to Read Replicas**
+
+The problem is that Postgres is optimized for transactional workloads: small, fast queries that touch a few rows at a time. Analytical queries do the opposite. They scan millions of rows, aggregate data, and compute statistics. These two workload types compete for the same CPU, memory, and I/O.
+
+Read Replicas solve this through isolation. Point your analytics tools at a replica. The replica handles the heavy queries. Production stays fast.
+
+```
+Production traffic -> Primary database
+Analytics traffic -> Read Replica
+```
+
+Your data team gets the access they need without risking production stability. The replica might slow down during a heavy report, but your customers never notice because production is untouched.
+
+**Configuring replicas for long-running queries**
+
+By default, Postgres may cancel long-running queries on a replica if they conflict with incoming replication data. A 10-minute analytics report can get terminated when the primary updates rows the query is reading.
+
+Two settings control this behavior:
+
+- `max_standby_streaming_delay`: How long the replica waits before canceling conflicting queries (default: 30 seconds). Increase this for longer analytics queries. The trade-off is more replication lag during heavy queries.
+- `max_standby_archive_delay`: Same concept for WAL archive replay.
+
+If queries still get canceled, enable `hot_standby_feedback`. This tells the primary not to vacuum rows the replica is reading. Use this as a last resort because it causes table bloat on the primary. Dead rows accumulate, increasing disk usage and potentially slowing primary queries.
+
+Start with the delay settings. Only enable `hot_standby_feedback` if you are still seeing query cancellations.
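+
+On a self-managed Postgres standby these are ordinary configuration settings; whether you can change them on a managed replica depends on the platform, so treat the following as a sketch of the underlying mechanism rather than a Supabase-specific recipe:
+
+```sql
+-- Allow analytics queries on the replica up to 15 minutes before cancellation
+alter system set max_standby_streaming_delay = '15min';
+alter system set max_standby_archive_delay = '15min';
+select pg_reload_conf();
+```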
+
+### You have hit the 16XL ceiling
+
+Once you reach 16XL (64 CPU cores, 256GB RAM), vertical scaling stops. The only way to add read capacity is horizontal: spread reads across replicas.
+
+### You have users in multiple regions
+
+A database in `us-east-1` serves US users with low latency. European users experience 100-150ms of network latency on every query. That adds up.
+
+Deploy a Read Replica in `eu-west-1`. Supabase's API load balancer automatically routes GET requests to the nearest replica. European users hit the European replica. Latency drops.
+
+### You need cost-effective read scaling
+
+Compare the cost of adding read capacity:
+
+| Approach | Starting point | Monthly cost | Read capacity added |
+| --------------- | -------------- | -------------- | ----------------------- |
+| Upgrade compute | XL ($210) | $410 (+$200) | ~2x on same instance |
+| Add replica | XL ($210) | $420 (+$210) | 2x across two instances |
+| Upgrade compute | 4XL ($960) | $1,870 (+$910) | ~2x on same instance |
+| Add replica | 4XL ($960) | $1,920 (+$960) | 2x across two instances |
+
+At lower tiers, the costs are similar. At higher tiers, replicas become more cost-effective for read scaling because you can add a smaller replica instead of doubling your primary.
+
+A 4XL primary with a 2XL replica ($960 + $410 = $1,370) costs less than an 8XL primary ($1,870) and may handle more read traffic.
+
+### You want redundancy
+
+Read Replicas provide a warm standby. If your primary has issues, your replica has a recent copy of your data. This is not automatic failover (that is coming with [Multigres](https://supabase.com/blog/multigres-vitess-for-postgres)), but it reduces your blast radius.
+
+## The decision framework
+
+Use this flowchart to decide:
+
+```mermaid
+flowchart TD
+ A[Database slowing down] --> B{CPU above 70% sustained?}
+ B -->|No| C[Monitor, do not scale yet]
+ B -->|Yes| D{Queries optimized? Indexes in place?}
+ D -->|No| E[Run EXPLAIN ANALYZE<br/>Add missing indexes<br/>Optimize first]
+ E --> D
+ D -->|Yes| F{Workload 80%+ reads?}
+ F -->|No| G[Upgrade compute<br/>Replicas will not help writes]
+ F -->|Yes| H{Already at 16XL?}
+ H -->|Yes| I[Read Replicas<br/>Only horizontal option left]
+ H -->|No| J{Need workload isolation<br/>or geo-distribution?}
+ J -->|Yes| K[Read Replicas]
+ J -->|No| L[Either works<br/>Compute is simpler<br/>Replicas scale further]
+```
+
+## Real-world scenarios
+
+### Scenario 1: SaaS application with growing traffic
+
+**Situation:** B2B SaaS on Large compute ($110/month). CPU at 75%. Workload is 85% reads. Users in US only. No analytics workload.
+
+**Recommendation:** Upgrade to XL ($210/month). Simpler path, no code changes, and you have headroom to grow. Consider replicas later when you add analytics or international users.
+
+### Scenario 2: E-commerce with analytics team
+
+**Situation:** E-commerce platform on 2XL ($410/month). Production is fine until the analytics team runs reports. Then checkout slows down. Workload is 90% reads.
+
+**Recommendation:** Add a Read Replica ($410/month for matching 2XL). Point analytics tools at the replica. Total cost $820/month, but production and analytics are isolated. The analytics team can run any query they want without paging the on-call engineer.
+
+### Scenario 3: Global consumer app
+
+**Situation:** Consumer app on XL ($210/month). Primary in `us-east-1`. Growing user base in Europe complaining about latency. Workload is 95% reads.
+
+**Recommendation:** Add a Read Replica in `eu-west-1` ($210/month). API requests from Europe automatically route to the European replica. Total cost $420/month. European latency drops from 150ms to 20ms.
+
+### Scenario 4: High-write IoT platform
+
+**Situation:** IoT platform on 4XL ($960/month). Devices send telemetry every second. CPU at 80%. Workload is 60% writes.
+
+**Recommendation:** Upgrade to 8XL ($1,870/month). Write-heavy workloads need a bigger primary. Read Replicas would only help with the 40% that is reads. Consider [Supabase Queues](https://supabase.com/docs/guides/queues) to batch and process writes asynchronously.
+
+## What about other scaling options?
+
+Read Replicas and compute upgrades are not the only tools:
+
+### Connection pooling (Supavisor)
+
+If you are hitting connection limits but CPU is fine, enable connection pooling. Supavisor multiplexes many client connections across fewer database connections. This is free and built into every Supabase project.
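+
+Switching to the pooler is typically just a connection string change. The strings below are placeholders for illustration; the exact values come from your project's connection settings in the Dashboard:
+
+```
+# Direct connection (one Postgres connection per client)
+postgresql://postgres:[password]@db.[project-ref].supabase.co:5432/postgres
+
+# Pooled connection via Supavisor (transaction mode, port 6543)
+postgresql://postgres.[project-ref]:[password]@aws-0-[region].pooler.supabase.com:6543/postgres
+```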
+
+### Supabase ETL and Analytics Buckets
+
+Read Replicas isolate analytics from production, but the replica is still Postgres. It still uses row-based storage optimized for transactions. If your analytics queries scan tens of millions of rows, even a dedicated replica will feel slow.
+
+This is where [Supabase ETL](https://supabase.com/blog/introducing-supabase-etl) and [Analytics Buckets](https://supabase.com/blog/introducing-analytics-buckets) come in. They solve a different problem: not just isolating analytics, but running analytics on infrastructure designed for analytical workloads.
+
+**How it works:**
+
+1. Supabase ETL captures changes from your Postgres tables using change-data-capture (CDC)
+2. Changes stream in near real-time to Analytics Buckets (or BigQuery)
+3. Analytics Buckets store your data in columnar Parquet format on S3, built on Apache Iceberg
+4. You query the data with tools like DuckDB, PyIceberg, or Apache Spark
+
+Columnar storage is dramatically faster for analytical queries. A query that scans a single column across 100 million rows only reads that column, not the entire row. Compression ratios are higher. Query times drop from minutes to seconds.
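+
+As an illustration (the bucket path and table layout are hypothetical, and S3 access is assumed to be configured), querying the exported Parquet files with DuckDB is plain SQL:
+
+```sql
+-- DuckDB reads only the columns the query touches.
+-- Assumes the httpfs extension and S3 credentials are set up.
+select date_trunc('month', created_at) as month, sum(amount) as revenue
+from read_parquet('s3://analytics-bucket/orders/*.parquet')
+group by 1
+order by 1;
+```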
+
+**When to use ETL and Analytics Buckets instead of Read Replicas:**
+
+- Your analytical queries scan millions of rows regularly
+- You need to retain historical data for years without bloating your database
+- You want 30-90% storage cost savings on large datasets
+- You need a complete audit trail with time-travel capabilities
+- Your data team wants to use specialized analytics tools (Spark, DuckDB, Python notebooks)
+
+**When Read Replicas are still the right choice:**
+
+- Your analytics queries are moderately heavy but not massive
+- You want your data team to use the same SQL they already know
+- You need the simplest possible setup with no new tools
+- You also need geo-distribution, which ETL does not provide
+
+**A common pattern: use both.** Keep 90 days of data in Postgres for fast operational queries. Stream everything to Analytics Buckets for long-term retention and heavy analytics. Your application queries Postgres. Your data team queries Analytics Buckets for historical trends. You get speed where you need it and cost efficiency where you need that.
+
+### Multigres: the future of horizontal write scaling
+
+Read Replicas scale reads. Bigger compute scales everything, but only so far. What happens when you max out the largest compute tier and your workload is write-heavy?
+
+This is the problem [Multigres](https://supabase.com/blog/multigres-vitess-for-postgres) is designed to solve. Multigres is a database proxy layer that brings Vitess-style horizontal scaling to Postgres. Vitess powers some of the largest MySQL deployments in the world (YouTube, Slack, GitHub). Multigres adapts that architecture for Postgres.
+
+**What Multigres enables:**
+
+- Sharding: distribute data across multiple Postgres instances
+- Query routing: direct queries to the right shard automatically
+- Failover: automatic high availability without manual intervention
+- Gradual scaling: start with connection pooling, grow into sharding as needed
+
+Multigres is still early (open source under Apache 2.0, currently seeking design partners). But it represents Supabase's long-term answer to the question: what do I do when vertical scaling runs out and Read Replicas are not enough?
+
+For most workloads today, the answer is still bigger compute or Read Replicas. But if you are building something that will eventually need true horizontal write scaling, Multigres is worth watching.
+
+## Getting started
+
+If you have decided Read Replicas are right for your workload:
+
+1. Go to **Project Settings > Infrastructure** in your Supabase Dashboard
+2. Click **Add Read Replica**
+3. Select a region (same region for analytics isolation, different region for geo-distribution)
+4. Choose a compute size (can match primary or be smaller for analytics-only workloads)
+
+The replica provisions in a few minutes. You will get a dedicated connection string. For analytics tools, use that connection string directly. For application traffic, the API load balancer handles routing automatically.
+
+Read the full setup guide in [the documentation](https://supabase.com/docs/guides/platform/read-replicas).
+
+## Summary
+
+| Factor | Choose bigger compute | Choose Read Replicas |
+| -------------------- | ----------------------- | ------------------------------------------ |
+| Workload | Write-heavy or balanced | Read-heavy (80%+) |
+| Current tier | Below 16XL | At or approaching 16XL |
+| Complexity tolerance | Want simplicity | Can handle routing |
+| Use case | General scaling | Analytics isolation, geo-distribution |
+| Code changes | None required | Minimal (connection strings for analytics) |
+
+Both paths work. The right choice depends on where your bottleneck actually is. Diagnose first, then scale appropriately.
diff --git a/apps/www/_events/2026-02-25-enterprise-innovation-with-bolt.mdx b/apps/www/_events/2026-02-25-enterprise-innovation-with-bolt.mdx
new file mode 100644
index 0000000000000..efa491eb7e8ac
--- /dev/null
+++ b/apps/www/_events/2026-02-25-enterprise-innovation-with-bolt.mdx
@@ -0,0 +1,42 @@
+---
+title: 'Vibe Coding, Done Right: AI Development in Production'
+meta_title: 'Vibe Coding, Done Right: AI Development in Production'
+subtitle: >-
+ Learn how enterprise innovation teams are using AI coding tools like Bolt to build real applications on Supabase
+meta_description: >-
+ Learn how enterprise teams use AI coding tools like Bolt to build production apps. Real customer story, governance models, and security practices.
+type: webinar
+onDemand: false
+date: '2026-02-25T07:00:00.000-08:00'
+timezone: America/Los_Angeles
+duration: 45 mins
+categories:
+ - webinar
+main_cta:
+ {
+ url: 'https://attendee.gotowebinar.com/register/678318862881390429',
+ target: '_blank',
+ label: 'Register now',
+ }
+speakers: 'chris_caruso'
+---
+
+Something strange is happening in large enterprises. Non-technical employees are building production software. CEOs are shipping features. And companies are canceling SaaS contracts worth millions.
+
+In this webinar, you will learn how enterprise innovation teams are using AI coding tools like Bolt to build real applications on Supabase. We will show you how to rapidly prototype internal tools, how to work within the constraints of an existing design system, and how to empower non-technical and semi-technical teams to build safely.
+
+## Key Takeaways
+
+- How to give non-technical teams the ability to build production software without compromising security or stability
+
+- The governance model that makes AI-assisted development safe for enterprises
+
+- Why prototypes built on the right foundation can go to production without being rebuilt
+
+- How to evaluate SaaS contracts differently when building becomes cheaper than buying
+
+- Real-world use cases for rapidly prototyping and building internal tools
+
+- The MCP integration that connects AI coding tools directly to your database
+
+Join us live to participate in the Q&A. Can't make it? We'll send you a link to the recording.
diff --git a/apps/www/app/blog/[slug]/page.tsx b/apps/www/app/blog/[slug]/page.tsx
index 3938a9cc70d6e..c00340d66eb3b 100644
--- a/apps/www/app/blog/[slug]/page.tsx
+++ b/apps/www/app/blog/[slug]/page.tsx
@@ -1,12 +1,12 @@
import { getAllCMSPostSlugs, getCMSPostBySlug } from 'lib/get-cms-posts'
import { getAllPostSlugs, getPostdata, getSortedPosts } from 'lib/posts'
+import type { Metadata } from 'next'
import { draftMode } from 'next/headers'
+import type { Blog, BlogData, PostReturnType } from 'types/post'
+
+import BlogPostClient from './BlogPostClient'
import { processCMSContent } from '~/lib/cms/processCMSContent'
import { CMS_SITE_ORIGIN, SITE_ORIGIN } from '~/lib/constants'
-import BlogPostClient from './BlogPostClient'
-
-import type { Metadata } from 'next'
-import type { Blog, BlogData, PostReturnType } from 'types/post'
export const revalidate = 30
@@ -93,8 +93,12 @@ export async function generateMetadata({ params }: { params: Promise }):
const postContent = await getPostdata(slug, '_blog')
const parsedContent = matter(postContent) as unknown as MatterReturn
const blogPost = parsedContent.data
- const imageField = blogPost.imgSocial ? blogPost.imgSocial : blogPost.imgThumb
- const metaImageUrl = imageField ? `/images/blog/${imageField}` : undefined
+ const blogImage = blogPost.imgThumb || blogPost.imgSocial
+ const metaImageUrl = blogImage
+ ? blogImage.startsWith('http')
+ ? blogImage
+ : `${CMS_SITE_ORIGIN.replace('/api-v2', '')}${blogImage}`
+ : undefined
return {
title: blogPost.title,
diff --git a/apps/www/components/Blog/BlogGridItem.tsx b/apps/www/components/Blog/BlogGridItem.tsx
index f119b9dee3342..ddaa0bddc4ec3 100644
--- a/apps/www/components/Blog/BlogGridItem.tsx
+++ b/apps/www/components/Blog/BlogGridItem.tsx
@@ -2,6 +2,7 @@ import dayjs from 'dayjs'
import authors from 'lib/authors.json'
import Image from 'next/image'
import Link from 'next/link'
+
import type Author from '~/types/author'
import type PostTypes from '~/types/post'
@@ -23,17 +24,16 @@ const BlogGridItem = ({ post }: Props) => {
}
}
+ const resolveImagePath = (img: string | undefined): string | null => {
+ if (!img) return null
+ return img.startsWith('/') || img.startsWith('http') ? img : `/images/blog/${img}`
+ }
+
const imageUrl = post.isCMS
- ? post.imgThumb
- ? post.imgThumb
- : post.imgSocial
- ? post.imgSocial
- : '/images/blog/blog-placeholder.png'
- : post.imgThumb
- ? `/images/blog/${post.imgThumb}`
- : post.imgSocial
- ? `/images/blog/${post.imgSocial}`
- : '/images/blog/blog-placeholder.png'
+ ? post.imgThumb || post.imgSocial || '/images/blog/blog-placeholder.png'
+ : resolveImagePath(post.imgThumb) ||
+ resolveImagePath(post.imgSocial) ||
+ '/images/blog/blog-placeholder.png'
return (
{
>
-
+
import('components/Blog/ShareArticleActions'))
const CTABanner = dynamic(() => import('components/CTABanner'))
@@ -162,7 +160,9 @@ const BlogPostRenderer = ({
const imageUrl = isCMS
? blogMetaData.imgThumb ?? ''
: blogMetaData.imgThumb
- ? `/images/blog/${blogMetaData.imgThumb}`
+ ? blogMetaData.imgThumb.startsWith('/') || blogMetaData.imgThumb.startsWith('http')
+ ? blogMetaData.imgThumb
+ : `/images/blog/${blogMetaData.imgThumb}`
: ''
return (
@@ -264,7 +264,7 @@ const BlogPostRenderer = ({
/>
) : (
blogMetaData.imgThumb && (
-
+
{
+ if (!img) return null
+ return img.startsWith('/') || img.startsWith('http') ? img : `/images/blog/${img}`
+ }
+
const imageUrl = blog.isCMS
- ? blog.imgThumb
- ? blog.imgThumb
- : blog.imgSocial
- ? blog.imgSocial
- : '/images/blog/blog-placeholder.png'
- : blog.imgThumb
- ? `/images/blog/${blog.imgThumb}`
- : blog.imgSocial
- ? `/images/blog/${blog.imgSocial}`
- : '/images/blog/blog-placeholder.png'
+ ? blog.imgThumb || blog.imgSocial || '/images/blog/blog-placeholder.png'
+ : resolveImagePath(blog.imgThumb) ||
+ resolveImagePath(blog.imgSocial) ||
+ '/images/blog/blog-placeholder.png'
return (
diff --git a/apps/www/components/Events/EventGridItem.tsx b/apps/www/components/Events/EventGridItem.tsx
index 319d916602d75..a16bee905ea57 100644
--- a/apps/www/components/Events/EventGridItem.tsx
+++ b/apps/www/components/Events/EventGridItem.tsx
@@ -40,7 +40,13 @@ const EventGridItem = ({ event }: Props) => {
fill
sizes="100%"
quality={100}
- src={event.type === 'casestudy' ? event.thumb : `/images/blog/${event.thumb}`}
+ src={
+ event.type === 'casestudy' ||
+ event.thumb.startsWith('/') ||
+ event.thumb.startsWith('http')
+ ? event.thumb
+ : `/images/blog/${event.thumb}`
+ }
className="scale-100 object-cover overflow-hidden"
alt={`${event.title} thumbnail`}
/>
diff --git a/apps/www/components/Solutions/PostGrid.tsx b/apps/www/components/Solutions/PostGrid.tsx
index 8602788850001..480576e00f10a 100644
--- a/apps/www/components/Solutions/PostGrid.tsx
+++ b/apps/www/components/Solutions/PostGrid.tsx
@@ -34,7 +34,11 @@ function PostGrid({ id, className, header, subheader, posts }: PostGridProps) {
{post.imgThumb && (
,
},
diff --git a/apps/www/layouts/comparison.tsx b/apps/www/layouts/comparison.tsx
index e8c54b55bb82c..a5a9949371bfb 100644
--- a/apps/www/layouts/comparison.tsx
+++ b/apps/www/layouts/comparison.tsx
@@ -76,13 +76,15 @@ const LayoutComparison = ({ components, props }: Props) => {
return cat
}),
},
- images: [
- {
- url: `https://supabase.com${basePath}/images/blog/${
- props.blog.imgSocial ? props.blog.imgSocial : props.blog.imgThumb
- }`,
- },
- ],
+ images: (() => {
+ const img = props.blog.imgSocial || props.blog.imgThumb
+ if (!img) return []
+ const url =
+ img.startsWith('/') || img.startsWith('http')
+ ? img
+ : `https://supabase.com${basePath}/images/blog/${img}`
+ return [{ url }]
+ })(),
}}
/>
diff --git a/apps/www/lib/remotePatterns.js b/apps/www/lib/remotePatterns.js
index 8a60102b56ac0..02ff77d2b8348 100644
--- a/apps/www/lib/remotePatterns.js
+++ b/apps/www/lib/remotePatterns.js
@@ -153,6 +153,13 @@ module.exports = [
port: '',
pathname: '**',
},
+ // OG Edge Function
+ {
+ protocol: 'https',
+ hostname: 'zhfonblqamxferhoguzj.supabase.co',
+ port: '',
+ pathname: '/functions/v1/generate-og',
+ },
// Dynamically generated CMS patterns based on CMS_SITE_ORIGIN
...generateCMSRemotePatterns(),
]
diff --git a/apps/www/public/rss.xml b/apps/www/public/rss.xml
index 911f9ea2a2e23..66e497ec4adcf 100644
--- a/apps/www/public/rss.xml
+++ b/apps/www/public/rss.xml
@@ -20,6 +20,13 @@
We are releasing Agent Skills for Postgres Best Practices to help AI coding agents write high quality, correct Postgres code.
Wed, 21 Jan 2026 00:00:00 -0700
+-
+ https://supabase.com/blog/read-replicas-vs-bigger-compute
+ When to use Read Replicas vs. bigger compute
+ https://supabase.com/blog/read-replicas-vs-bigger-compute
+ A practical guide to diagnosing database slowdowns and choosing between vertical scaling and Read Replicas based on your workload, budget, and performance bottlenecks.
+ Thu, 15 Jan 2026 00:00:00 -0700
+
-
https://supabase.com/blog/introducing-trae-solo-integration
Introducing TRAE SOLO integration with Supabase
@@ -2463,6 +2470,13 @@
Five days of Supabase.
Thu, 25 Mar 2021 00:00:00 -0700
+-
+ https://supabase.com/blog/postgresql-views
+ Postgres Views
+ https://supabase.com/blog/postgresql-views
+ Creating and using a view in PostgreSQL.
+ Wed, 18 Nov 2020 00:00:00 -0700
+
-
https://supabase.com/blog/continuous-postgresql-backup-walg
Continuous PostgreSQL Backups using WAL-G
@@ -2498,13 +2512,6 @@
We're releasing a new version of our Supabase client with some awesome new improvements.
Fri, 30 Oct 2020 00:00:00 -0700
--
- https://supabase.com/blog/postgresql-views
- Postgres Views
- https://supabase.com/blog/postgresql-views
- Creating and using a view in PostgreSQL.
- Wed, 18 Nov 2020 00:00:00 -0700
-
-
https://supabase.com/blog/case-study-monitoro
Monitoro Built a Web Crawler Handling Millions of API Requests