Private beta · Q4 2026
Vega turns the repetitive work inside your company — inbox triage, Telegram support, data reconciliation, document extraction — into autonomous Python agents, deployed on your AWS account, with full audit trails and cost caps.
// The problem
Ten to thirty percent of payroll in most operations teams goes to tasks that are deterministic, rule-driven, and infuriatingly repetitive: classifying a ticket, copying a value between two SaaS tools, answering the same Telegram question for the 400th time.
General-purpose LLMs are great at these tasks — in isolation. Turning them into something your CFO can trust in production is a completely different engineering problem. That is what Vega is built to solve.
// The solution
Define a workflow in plain language and one YAML file. Vega compiles it into a Python agent, wires it to your tools (Telegram, IMAP, Postgres, HubSpot, Stripe, Airtable…), runs it on your AWS account inside a hardened Lambda or Fargate service, and gives you one dashboard to see every run, every cost, every escalation.
When it is not sure, it asks — on Telegram, by default. No black boxes.
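The "plain language plus one YAML file" claim can be made concrete. The fragment below is a hypothetical sketch of what such a workflow file could look like — every field name and value here is an illustrative assumption, not Vega's actual schema:

```yaml
# Hypothetical workflow definition — field names are illustrative,
# not Vega's published schema.
name: inbox-triage
trigger:
  source: imap
  poll_interval: 60s
steps:
  - classify:
      labels: [billing, support, sales, spam]
      confidence_threshold: 0.85
  - route:
      billing: hubspot.create_ticket
      spam: archive
escalation:
  channel: telegram
  on: confidence_below_threshold
limits:
  max_tokens_per_run: 20000
  max_cost_per_run_usd: 0.02
```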
// What ships
Inbox & Telegram Triage: Classify, label, route, and draft responses. Escalate uncertain cases to a human on Telegram with one tap to approve or edit.
Document extraction: Invoices, contracts, shipping docs — turned into typed JSON records and pushed straight to your ERP or data warehouse.
Data reconciliation: Nightly agents that compare Stripe, HubSpot, and your database, open a ticket for every mismatch, and resolve the simple ones automatically.
Audit trails & cost caps: Every run, every prompt, every token, every dollar — visible in one dashboard. Hard caps stop a runaway agent before it costs you anything scary.
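The reconciliation card above reduces to a simple core: index both systems' records by invoice and report every divergence. A minimal sketch, assuming records arrive as dicts; the field names and the shape of the mismatch report are illustrative, not Vega's API:

```python
# Minimal reconciliation core: compare payment records from two systems
# by invoice ID and amount. Record shape is an illustrative assumption.

def reconcile(stripe_rows, db_rows):
    """Return a list of mismatch descriptions between two record sets."""
    stripe = {r["invoice_id"]: r["amount_cents"] for r in stripe_rows}
    db = {r["invoice_id"]: r["amount_cents"] for r in db_rows}
    mismatches = []
    for inv in sorted(stripe.keys() | db.keys()):
        a, b = stripe.get(inv), db.get(inv)
        if a is None:
            mismatches.append({"invoice_id": inv, "kind": "missing_in_stripe"})
        elif b is None:
            mismatches.append({"invoice_id": inv, "kind": "missing_in_db"})
        elif a != b:
            mismatches.append({"invoice_id": inv, "kind": "amount_mismatch",
                               "stripe": a, "db": b})
    return mismatches
```

A nightly agent would feed each mismatch to the ticketing integration and auto-resolve the trivial cases.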
// How it works
Write the task in plain English. Upload a few examples of inputs and expected outputs. Commit a short YAML — or let Vega draft it.
Telegram, IMAP, Postgres, Stripe, HubSpot, Airtable, S3, custom HTTP — check the boxes you need. Secrets stay in your AWS Secrets Manager.
A Python agent is generated, unit-tested against your examples, deployed as a Lambda or Fargate service in your own account, and wired to CloudWatch.
Watch every run in the dashboard. Approve or edit uncertain cases from Telegram. Tune thresholds. Ship new versions without downtime.
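The steps above can be sketched end to end. Below is a hedged guess at what a generated agent's entry point might look like as a Lambda-style handler; `classify` and `escalate_to_telegram` are deterministic stubs standing in for the model call and the Telegram integration, and every name here is an assumption for illustration:

```python
# Sketch of a generated agent's entry point, assuming a Lambda-style
# handler. classify() and escalate_to_telegram() are illustrative stubs.

CONFIDENCE_THRESHOLD = 0.85  # tunable per workflow

def classify(text):
    """Stub: a real agent would call the model here."""
    if "invoice" in text.lower():
        return "billing", 0.95
    return "support", 0.60

def escalate_to_telegram(event, label, confidence):
    """Stub: would send a one-tap approve/edit message to a human."""
    return {"status": "escalated", "suggested_label": label,
            "confidence": confidence}

def handler(event, _context=None):
    label, confidence = classify(event["body"])
    if confidence < CONFIDENCE_THRESHOLD:
        return escalate_to_telegram(event, label, confidence)
    return {"status": "routed", "label": label}
```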
// Technical highlights
Vega deploys into your AWS account via a read-only Terraform module. Your data never leaves your VPC, and you can revoke access at any time.
Per-agent budgets, per-run token caps, and human-in-the-loop gates for any action classified as irreversible. Every decision is logged.
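Per-run token caps and per-agent budgets can be enforced with a small guard object checked before every model call. A minimal sketch, assuming a synchronous agent loop; the class and limit names are illustrative, not Vega's actual runtime:

```python
# Sketch of hard spend limits: reject a charge *before* it breaches
# either the per-run token cap or the per-agent dollar budget.

class BudgetExceeded(RuntimeError):
    pass

class BudgetGuard:
    def __init__(self, max_tokens_per_run, max_usd_per_agent):
        self.max_tokens_per_run = max_tokens_per_run
        self.max_usd_per_agent = max_usd_per_agent
        self.tokens_this_run = 0
        self.usd_spent = 0.0

    def charge(self, tokens, usd):
        """Record usage; raise before a limit would be breached."""
        if self.tokens_this_run + tokens > self.max_tokens_per_run:
            raise BudgetExceeded("per-run token cap")
        if self.usd_spent + usd > self.max_usd_per_agent:
            raise BudgetExceeded("per-agent budget")
        self.tokens_this_run += tokens
        self.usd_spent += usd

    def new_run(self):
        """Reset the per-run counter; the agent budget persists."""
        self.tokens_this_run = 0
```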
Today we default to Claude via AWS Bedrock. The runtime is model-agnostic — you can point any workflow at a self-hosted model when that makes sense.
// AWS Bedrock
Every model call in every customer agent goes through AWS Bedrock. That is not a convenience — it is a deliberate architectural choice that unlocks four things we could not ship otherwise.
Every InvokeModel call is authenticated by the Lambda execution role — no long-lived API keys, no secrets rotation, no leaked tokens on GitHub. This alone unblocks most enterprise pilots.
Bedrock in eu-central-1 lets us contractually guarantee that customer prompts and completions never leave European infrastructure — a hard requirement in every DPA we sign.
Managed PII redaction, content moderation, and denied-topic filters — configured per agent from Terraform. Building these in-house would take weeks; with Bedrock, each is a single resource block.
Managed RAG over S3 and managed tool-use orchestration without operating our own vector store or agent loop — we focus on the workflow, AWS operates the infrastructure.
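The key-less authentication point above is worth seeing in code. A minimal sketch using boto3's `bedrock-runtime` Converse API: credentials come from the execution role, so nothing secret appears anywhere. The default model ID and region here are illustrative assumptions:

```python
# Sketch of a key-less Bedrock call: the Lambda execution role supplies
# credentials, so no API keys exist to rotate or leak.

def build_converse_request(model_id, prompt, max_tokens=1024):
    """Assemble keyword arguments for bedrock-runtime's Converse API."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

def invoke(prompt, model_id="anthropic.claude-3-haiku-20240307-v1:0",
           region="eu-central-1"):
    import boto3  # credentials resolved from the execution role at call time
    client = boto3.client("bedrock-runtime", region_name=region)
    resp = client.converse(**build_converse_request(model_id, prompt))
    return resp["output"]["message"]["content"][0]["text"]
```

Pinning the client to `eu-central-1` is also how the data-residency guarantee above is enforced in code, not just in contract.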
// Model choice
Vega routes each workflow to the right Claude tier on Bedrock: Claude Opus 4.7 for deep multi-step reasoning and agentic coding, Claude Sonnet 4.6 as the balanced default for most workflows, and Claude Haiku 4.5 for high-volume classification and cheap pre-filtering. Swapping tiers is a single line in the agent config.
Bedrock’s Converse API exposes every other major foundation-model provider under the same interface, which lets us match the workload to the model without rewriting code.
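The "single line in the agent config" swap could reduce to a lookup from tier name to Bedrock model ID. A sketch under that assumption — the model IDs below are placeholders, since the tiers named above are forward-looking and their Bedrock identifiers are not confirmed:

```python
# Sketch of tier routing: the agent config names a tier, the runtime
# resolves it to a Bedrock model ID. IDs are placeholders, not real
# Bedrock identifiers.

MODEL_TIERS = {
    "deep_reasoning": "anthropic.claude-opus-placeholder",
    "default": "anthropic.claude-sonnet-placeholder",
    "high_volume": "anthropic.claude-haiku-placeholder",
}

def resolve_model(agent_config):
    """Map the workflow's configured tier to a model ID."""
    tier = agent_config.get("model_tier", "default")
    try:
        return MODEL_TIERS[tier]
    except KeyError:
        raise ValueError(f"unknown model tier: {tier!r}")
```

Because the Converse API keeps one interface across providers, the same lookup table could point a tier at a non-Claude model without touching agent code.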
// Cost & scale
Bedrock's on-demand pricing is ideal for pilots; Provisioned Throughput kicks in once a workflow stabilises and the economics flip. We forecast and cap spend per agent from day one, and every token is attributable in CloudWatch and CloudTrail.
Our internal target: under $0.02 per successful agent run at beta, trending to sub-cent at production scale.
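The $0.02 target can be sanity-checked with back-of-envelope arithmetic: cost per run is input tokens times the input rate plus output tokens times the output rate. The per-token prices below are illustrative placeholders, not Bedrock's actual rates:

```python
# Back-of-envelope check on the $0.02/run target. The per-1k-token
# prices are illustrative placeholders, not Bedrock's published rates.

def cost_per_run(input_tokens, output_tokens,
                 usd_per_1k_input=0.00025, usd_per_1k_output=0.00125):
    return (input_tokens / 1000) * usd_per_1k_input \
         + (output_tokens / 1000) * usd_per_1k_output

# e.g. a single classification run: 4k tokens in, 500 tokens out
run_cost = cost_per_run(4000, 500)
```

Under these assumed rates a run costs well under the two-cent target, which is why the headroom matters: guardrail passes, retries, and escalation drafts all fit inside the same cap.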
// Design partners
Ten design partners, a fixed discount for life, and a direct Slack line to engineering.