Technical blog
How this website keeps itself current
cobry.ai is built around one practical problem: Google ships Workspace and Gemini Enterprise AI features faster than a static feature matrix can keep up. The site is a Next.js app, but the interesting part is the update loop behind it.
The public site looks like a simple catalogue. Underneath, it has a small content system that watches upstream Google sources, stores incoming changes, asks agents to reason about them, and turns the useful findings into reviewable CMS proposals. The agents do the boring search and comparison work. A human still decides what ships.
The stack in one pass
The website runs on Next.js 16, React 19, Payload 3, PostgreSQL on Cloud SQL, optional Google Cloud Storage for media, and Cloud Run for hosting. The agent layer uses @google/adk with Vertex AI Gemini models. The operational tools around it are Payload Admin for review, Google Chat for proposal alerts, and PostHog for client- and server-side analytics.
The architecture
There are four moving parts. Google publishes updates. Cron routes pull those updates into Payload. Agents inspect the queue and create proposals. Approved Payload records are merged into the public pages at request time.
Sources → Ingestion → Review loop → Publishing
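The cron entry points that drive the ingestion step can be thin. Here is a minimal sketch of a protected cron route, assuming a shared-secret Authorization header; the header scheme, the CRON_SECRET variable, and the syncFeeds() helper are all assumptions, not the site's actual code.

```ts
// app/api/cron/sync/route.ts — hypothetical protected cron entry point.
import { NextRequest, NextResponse } from 'next/server';

// Assumed helper that pulls upstream updates into Payload (sketched later).
declare function syncFeeds(): Promise<{ ingested: number }>;

export async function POST(req: NextRequest) {
  // Reject callers that don't present the scheduler's shared secret.
  const auth = req.headers.get('authorization');
  if (auth !== `Bearer ${process.env.CRON_SECRET}`) {
    return NextResponse.json({ error: 'unauthorized' }, { status: 401 });
  }
  const result = await syncFeeds();
  return NextResponse.json(result);
}
```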
The public site is deliberately boring
The frontend lives in the Next.js App Router under src/app/(frontend). It renders the feature index, product pages, and feature pages inside the same docs-style shell. Most of the page code is plain React: look up a product or feature, render the availability matrix, show sources, and link out to official Google docs.
There is also a checked-in baseline at src/lib/site-data.ts. That file gives the site a known-good catalogue of products, tiers, features, summaries, and fallback availability. Payload can override it, but the static catalogue means the website can still render useful pages while the CMS catches up.
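To make that concrete, a baseline entry in site-data.ts might look roughly like the sketch below. The type and field names are illustrative, not the site's actual definitions.

```ts
// Hypothetical shape for the checked-in baseline in src/lib/site-data.ts.
export type TierAvailability = {
  tier: string;              // e.g. "Business Standard"
  available: boolean;
  note?: string;             // caveats such as "rolling out"
};

export type BaselineFeature = {
  slug: string;              // stable key shared with the CMS
  product: string;
  title: string;
  summary: string;
  docsUrl: string;           // link out to official Google docs
  availability: TierAvailability[];
};

// A known-good catalogue the site can always render from.
export const baselineFeatures: BaselineFeature[] = [
  {
    slug: 'gemini-in-docs',
    product: 'google-docs',
    title: 'Gemini in Docs',
    summary: 'AI-assisted writing inside Google Docs.',
    docsUrl: 'https://support.google.com/docs/',
    availability: [{ tier: 'Business Standard', available: true }],
  },
];
```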
Payload is the source of truth for changes
Payload stores the records that change over time: features, products, tiers, sources, update items, agent runs, and proposals. In production it uses PostgreSQL through the Payload Postgres adapter. Uploaded media can go to Google Cloud Storage when a bucket is configured.
The frontend reads Payload through src/lib/cms-data.ts. Feature pages first ask the CMS for a matching slug. If one exists, the page uses CMS copy, sources, uploaded videos, availability rows, pending proposal counts, and verification dates. If the CMS does not have the record yet, the page falls back to site-data.ts.
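In sketch form, that CMS-first read path is a single lookup with a fallback. The function and the injected CMS client below are assumptions built on the baseline shape sketched earlier, not the real cms-data.ts.

```ts
// Hypothetical CMS-first lookup with a static fallback.
import { baselineFeatures, type BaselineFeature } from './site-data';

type FeaturePageData = BaselineFeature & {
  fromCms: boolean;
  pendingProposals?: number;
  lastVerifiedAt?: string;
};

export async function getFeatureBySlug(
  slug: string,
  cms: { findFeature(slug: string): Promise<FeaturePageData | null> },
): Promise<FeaturePageData | null> {
  // Ask Payload first; a CMS record wins when it exists.
  const fromCms = await cms.findFeature(slug);
  if (fromCms) return { ...fromCms, fromCms: true };

  // Otherwise fall back to the checked-in baseline so the page still renders.
  const baseline = baselineFeatures.find((f) => f.slug === slug);
  return baseline ? { ...baseline, fromCms: false } : null;
}
```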
The self-sustaining part is a proposal loop
The site does not let an agent publish directly. It lets agents create work for a reviewer. That keeps the catalogue moving without making the public site depend on unreviewed model output.
Sync feeds → Classify updates → Draft proposal → Human review → Apply change → Render site
The sync route pulls Workspace Updates and Gemini Enterprise release notes into update_items. Each item keeps a source URL, an external ID, a content snapshot, and a content hash. If the same source changes later, the hash changes and the item is opened again.
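The reopen-on-change rule reduces to a hash comparison. Here is a minimal sketch of it, with Payload access abstracted behind a tiny interface; the collection and field names are assumptions based on the description above.

```ts
import { createHash } from 'node:crypto';

// Assumed minimal surface over the update_items collection.
interface UpdateItemsStore {
  findByExternalId(id: string): Promise<{ id: string; contentHash: string } | null>;
  create(data: Record<string, unknown>): Promise<void>;
  update(id: string, data: Record<string, unknown>): Promise<void>;
}

function contentHash(snapshot: string): string {
  return createHash('sha256').update(snapshot).digest('hex');
}

export async function upsertUpdateItem(
  store: UpdateItemsStore,
  item: { externalId: string; sourceUrl: string; snapshot: string },
): Promise<void> {
  const hash = contentHash(item.snapshot);
  const existing = await store.findByExternalId(item.externalId);

  if (!existing) {
    // First sighting: store the snapshot and open the item for the agents.
    await store.create({ ...item, contentHash: hash, status: 'pending' });
  } else if (existing.contentHash !== hash) {
    // Same source changed upstream: refresh the snapshot and reopen the item.
    await store.update(existing.id, {
      snapshot: item.snapshot,
      contentHash: hash,
      status: 'pending',
    });
  }
}
```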
The discovery agent reads pending items, filters out irrelevant posts, compares the update against nearby features, and creates a proposal when there is a new feature or a meaningful change. The verification agent works from the other direction: it looks for stale feature records, checks public sources again, and proposes fixes when the stored page no longer matches the current docs.
The agent harness uses Google ADK and Vertex AI
The agent layer is TypeScript code using @google/adk. There are two main agents: discovery for new upstream announcements and verification for checking existing feature pages. Both are LlmAgent instances run through an InMemoryRunner, with transfer disabled and low temperature so the model stays inside the task.
Model access goes through Vertex AI. getVertexGeminiModel() builds an ADK Gemini model with vertexai: true, using GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION. Discovery defaults to gemini-2.5-flash unless DISCOVERY_AGENT_MODEL is set. Verification defaults to gemini-3-flash-preview, and Gemini 3 models run in the global Vertex location.
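Taken together, the wiring might look roughly like this. The shape is sketched from the names in this post; the constructor option names are assumptions about the @google/adk TypeScript surface, not its verbatim API.

```ts
import { Gemini, LlmAgent } from '@google/adk';

// Assumed helper from the post: build a Vertex-backed Gemini model.
export function getVertexGeminiModel(modelName: string) {
  return new Gemini({
    model: modelName,
    vertexai: true, // route through Vertex AI rather than the API-key path
    project: process.env.GOOGLE_CLOUD_PROJECT,
    location: process.env.GOOGLE_CLOUD_LOCATION,
  });
}

// Discovery agent: model from env with a default, low temperature, no transfer.
// Option names here are guesses at the ADK config surface.
export const discoveryAgent = new LlmAgent({
  name: 'discovery',
  model: getVertexGeminiModel(process.env.DISCOVERY_AGENT_MODEL ?? 'gemini-2.5-flash'),
  instruction: 'Classify pending update items and draft CMS proposals as JSON.',
  generateContentConfig: { temperature: 0.1 },
  disallowTransferToParent: true,
  disallowTransferToPeers: true,
});
```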
Trigger → Runner → Agent → Result
The shared runner is intentionally small. runAgentWithTimeout() creates a session, calls runner.runAsync(), forwards each ADK event to the caller, and aborts the run when it exceeds the configured timeout. Discovery gets up to four LLM calls. Verification gets up to twelve because it can fetch sources, search for replacement docs, and compare findings against stored fields.
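A condensed version of that harness could look like the sketch below. Only runner.runAsync() is named in the post; the session-creation call and the option shapes are assumptions about the ADK TypeScript API.

```ts
import { InMemoryRunner } from '@google/adk';

export async function runAgentWithTimeout(opts: {
  runner: InMemoryRunner;
  appName: string;
  userId: string;
  newMessage: unknown; // the JSON trigger payload, wrapped as ADK content
  timeoutMs: number;
  onEvent: (event: unknown) => void;
}): Promise<void> {
  // Assumed session API: create an isolated session per run.
  const session = await opts.runner.sessionService.createSession({
    appName: opts.appName,
    userId: opts.userId,
  });
  const deadline = Date.now() + opts.timeoutMs;

  // Forward each ADK event to the caller; abort once the deadline passes.
  for await (const event of opts.runner.runAsync({
    userId: opts.userId,
    sessionId: session.id,
    newMessage: opts.newMessage,
  })) {
    if (Date.now() > deadline) {
      throw new Error(`agent run exceeded ${opts.timeoutMs}ms`);
    }
    opts.onEvent(event);
  }
}
```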
Discovery sends the model a JSON payload containing the update item, today's date, known products, tiers, and the most relevant existing features. Its output has to pass the discovery Zod schema before anything becomes a proposal. Verification has four tools: get_feature reads the Payload snapshot, fetch_url extracts readable source content and hashes it, web_search uses ADK Google Search grounding with a one-search budget, and compare_to_stored gives the agent a field-level diff target.
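As an illustration of that discovery output gate, the schema could be built with Zod along these lines. The actual field list is not published, so every field below is an assumption.

```ts
import { z } from 'zod';

// Hypothetical field list for the discovery output gate.
export const discoveryOutputSchema = z.object({
  relevant: z.boolean(),
  proposalType: z.enum(['new_feature', 'field_update', 'none']),
  featureSlug: z.string().min(1).optional(),
  summary: z.string().optional(),
  confidence: z.number().min(0).max(1),
  citations: z.array(z.string().url()).default([]),
  reasoning: z.string(),
});

export type DiscoveryOutput = z.infer<typeof discoveryOutputSchema>;

// Nothing becomes a proposal unless the model's JSON parses cleanly.
export function parseDiscoveryOutput(raw: string): DiscoveryOutput | null {
  try {
    const result = discoveryOutputSchema.safeParse(JSON.parse(raw));
    return result.success ? result.data : null;
  } catch {
    return null; // not even valid JSON
  }
}
```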
Review is the safety boundary
Proposals are normal Payload records. They include the proposed change, reasoning, confidence, citations, status, and the agent run that produced them. Admins review them in a custom Payload view. Approving a proposal calls applyProposal(), which updates the right collections through typed code paths.
That apply step is where the system becomes maintainable. New feature proposals can create feature records, link sources, add tier availability, and mark the related update item as processed. Field update proposals can update summaries, descriptions, availability notes, how-to links, feature type, value copy, and tier availability.
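In sketch form, the apply step is a typed switch, so an approval can only write to known collections and whitelisted fields. The proposal shapes and store interface below are assumptions, not the site's real applyProposal().

```ts
// Assumed minimal surface over the Payload collections.
interface CmsStore {
  create(collection: string, data: Record<string, unknown>): Promise<void>;
  update(collection: string, id: string, data: Record<string, unknown>): Promise<void>;
}

// Two illustrative proposal types; the real system supports more fields.
type Proposal =
  | {
      type: 'new_feature';
      updateItemId: string;
      payload: { slug: string; title: string; summary: string; sourceIds: string[] };
    }
  | {
      type: 'field_update';
      featureId: string;
      payload: { field: 'summary' | 'description' | 'availabilityNote'; value: string };
    };

export async function applyProposal(db: CmsStore, proposal: Proposal): Promise<void> {
  switch (proposal.type) {
    case 'new_feature':
      // Create the feature and close the originating update item.
      await db.create('features', proposal.payload);
      await db.update('update_items', proposal.updateItemId, { status: 'processed' });
      break;
    case 'field_update':
      // Only whitelisted fields can be written, one per proposal.
      await db.update('features', proposal.featureId, {
        [proposal.payload.field]: proposal.payload.value,
      });
      break;
  }
}
```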
Google Chat is the review queue alarm
Proposal creation calls notifyProposalCreation(). If GOOGLE_CHAT_WEBHOOK_URL is set, the app posts a Google Chat card to that webhook. Small batches send one card per proposal. Larger batches send a summary card with the first five proposals and a button back to /admin/proposals.
The Chat payload is intentionally lightweight: feature title, proposal type, confidence, changed field, truncated reasoning, and a Review queue button. The notification carries no state of its own. Payload remains the source of truth, and Chat gets reviewers back to the admin queue quickly.
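A single-proposal card could be posted like this. The cardsV2 structure follows Google Chat's incoming-webhook card format; the SITE_URL variable and the exact field selection are illustrative.

```ts
// Sketch of notifyProposalCreation() for a single proposal.
export async function notifyProposalCreation(p: {
  featureTitle: string;
  proposalType: string;
  confidence: number;
  changedField?: string;
  reasoning: string;
}): Promise<void> {
  const webhook = process.env.GOOGLE_CHAT_WEBHOOK_URL;
  if (!webhook) return; // notifications are optional

  await fetch(webhook, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      cardsV2: [{
        cardId: 'proposal',
        card: {
          header: {
            title: p.featureTitle,
            subtitle: `${p.proposalType} · confidence ${p.confidence}`,
          },
          sections: [{
            widgets: [
              // Truncated reasoning keeps the card lightweight.
              { textParagraph: { text: p.reasoning.slice(0, 280) } },
              ...(p.changedField
                ? [{ decoratedText: { topLabel: 'Changed field', text: p.changedField } }]
                : []),
              {
                buttonList: {
                  buttons: [{
                    text: 'Review queue',
                    onClick: {
                      openLink: { url: `${process.env.SITE_URL}/admin/proposals` },
                    },
                  }],
                },
              },
            ],
          }],
        },
      }],
    }),
  });
}
```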
Production is a small Cloud Run service
Cloud Build builds the app with Google buildpacks and deploys it to Cloud Run in europe-west1. The service attaches Cloud SQL, reads secrets from Secret Manager, and uses environment variables to tune the batch sizes for sync, discovery, and verification.
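The batch-size tuning can reduce to a few defensive env reads at startup. The variable names here are guesses, not the service's actual configuration.

```ts
// Parse a positive integer from the environment, falling back to a default.
function intFromEnv(name: string, fallback: number): number {
  const raw = process.env[name];
  const parsed = raw ? Number.parseInt(raw, 10) : NaN;
  return Number.isFinite(parsed) && parsed > 0 ? parsed : fallback;
}

// Hypothetical knobs for the three cron-driven loops.
export const batchSizes = {
  sync: intFromEnv('SYNC_BATCH_SIZE', 25),
  discovery: intFromEnv('DISCOVERY_BATCH_SIZE', 5),
  verification: intFromEnv('VERIFICATION_BATCH_SIZE', 3),
};
```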
PostHog tracks both sides of the app. The browser sends analytics through the local /ingest proxy, while server events record agent starts, completions, failures, and proposal decisions. Google Chat can receive proposal workflow notifications when the webhook is configured.
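On the server side, those agent lifecycle events can be captured with posthog-node along these lines; the event names and distinct-id scheme are illustrative, not the site's actual taxonomy.

```ts
import { PostHog } from 'posthog-node';

// Assumed env vars for the PostHog project key and host.
const posthog = new PostHog(process.env.POSTHOG_API_KEY!, {
  host: process.env.POSTHOG_HOST ?? 'https://eu.posthog.com',
});

export function trackAgentRun(
  runId: string,
  agent: 'discovery' | 'verification',
  outcome: 'started' | 'completed' | 'failed',
): void {
  posthog.capture({
    distinctId: `agent-run:${runId}`, // server events still need a distinct id
    event: `agent_run_${outcome}`,
    properties: { agent, runId },
  });
}

// Flush queued events before the Cloud Run instance is reclaimed.
export async function shutdownAnalytics(): Promise<void> {
  await posthog.shutdown();
}
```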
What this does and does not automate
The self-sustaining part is the maintenance loop, not the final editorial judgement. External schedulers still need to call the protected cron endpoints. Reviewers still approve or reject proposals. The local payload.db file is useful for authoring and reference work, but production does not depend on it.
That tradeoff is intentional. The system keeps watching Google, keeps the queue fresh, keeps stale pages visible to reviewers, and keeps the public site grounded in approved records.