Google Cloud Startup Credits

Q-Star AI Google Cloud Use Case

Q-Star AI already operates multiple AI generation workflows across video production, app generation, agent orchestration, and automated media production. We are preparing to consolidate these workloads on Google Cloud.

Core Positioning

Q-Star AI is building a multi-agent AI creation infrastructure for video, apps, and automated creative workflows. AI is not an add-on feature; it is the core execution layer of every product we ship.

We are currently in active discussions with institutional investors and startup ecosystem partners. We are preparing the Google Cloud Startup Credits application package in parallel so that the infrastructure plan, AI workload forecast, and evidence materials are ready once the investment or partner channel is confirmed.

Current Operating Workloads

These are not planned features. They are running in production today.

🎬

Drama Engine โ€” Video Pipeline

End-to-end AI video production: brief → script → TTS → render → subtitle burn → CDN delivery. Running in production with 435+ MB of generated video assets across 22+ completed productions.

โ— Live 22+ generated videos ยท render plans ยท scripts ยท summaries
🤖

AgentOS โ€” Multi-Agent Orchestration

Codex-led multi-agent collaboration system using OpenClaw, Hermes, Claude, Q-Star agents, shared queues, R2/Qiniu handoffs, and independent verification. Runtime evidence for OpenClaw, Hermes, and Claude is on server 113.

โ— Live Daily autonomous workflows ยท persistent memory ยท Telegram interface
📱

AppForge โ€” App Generation

Natural language to production-ready iOS + Android apps. APK packaging and store asset generation included. 7 APK artifacts in storage (136 MB total).

โ— Operational 7 APK artifacts ยท screenshot generation ยท store packaging
🎵

ClipAI + AI Music โ€” Consumer Layer

AI Music is live on the WeChat Mini-Program; the ClipAI iOS + Android app is in App Store review. Both are consumer-facing interfaces to the same infrastructure.

โ— AI Music Live โ— ClipAI In Review

Why Google Cloud

Our current infrastructure handles early-stage workloads. Google Cloud is the target for production-scale AI generation.

Vertex AI / Gemini as Core AI Layer

Gemini is planned as the core model in our routing layer. Vertex AI provides the managed infrastructure for planning, routing, text, code, and multimodal generation at scale, without us managing model serving ourselves.

Scalable Generation Workers

Video rendering, app generation, and agent execution are compute-intensive. Cloud Run and GKE provide the elastic scaling we need to handle burst workloads without over-provisioning.
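As a concrete illustration of the burst-scaling setup described above, a hypothetical Cloud Run (Knative) service spec for a render worker. The service name, image path, and resource limits are illustrative assumptions:

```yaml
# Hypothetical Cloud Run service for a burst-scaled render worker.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: render-worker
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # scale to zero when idle
        autoscaling.knative.dev/maxScale: "50"  # cap burst fan-out
    spec:
      containerConcurrency: 1                   # one render job per instance
      containers:
        - image: gcr.io/PROJECT_ID/render-worker:latest
          resources:
            limits:
              cpu: "4"
              memory: 8Gi
```

Scale-to-zero keeps idle cost at zero between render bursts, while the max-scale cap bounds spend during fan-out; both are the knobs that make Cloud Run preferable to static over-provisioning for this workload shape.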

Centralized Storage and Delivery

Cloud Storage replaces our fragmented R2 + Qiniu setup with a unified, globally accessible artifact store: generated videos, APKs, images, audio, and prompt archives in one place.

Production-Grade Observability

Cloud Monitoring and BigQuery give us the reliability dashboards, cost forecasting, and workload analytics we need to operate at scale and demonstrate AI/ML usage to stakeholders.

AI-First Workload Profile

AI/ML workloads are expected to represent more than 50% of Q-Star AI's cloud usage.

| Workload | AI/ML Relevance | Google Cloud Target |
| --- | --- | --- |
| Agent planning & orchestration | LLM task decomposition, routing, tool selection | Vertex AI / Gemini API |
| Video script generation | LLM narrative writing, scene planning | Vertex AI / Gemini API |
| TTS voice synthesis | Neural TTS, 6 voice profiles | Cloud Run, GPU workers |
| Video rendering & assembly | ffmpeg pipeline, subtitle burn, BGM mix | GKE, GPU-enabled compute |
| App UI/code generation | LLM code generation, screenshot-to-code | Gemini, Cloud Run |
| APK build & packaging | Automated build pipeline, asset generation | Cloud Run, Cloud Storage |
| Model routing & failover | Multi-model orchestration, cost optimization | Vertex AI, Monitoring |
| Usage analytics | Workload forecasting, cost reporting | BigQuery |

Vertex AI / Gemini Usage Plan

Gemini is planned as the core model in our routing layer. Vertex AI is the target for managed, scalable AI inference.

🧠

Planning & Reasoning

Gemini handles top-level agent planning, task decomposition, and multi-step reasoning across all four products.

โœ๏ธ

Text & Code Generation

Script writing for Drama Engine, UI/code generation for AppForge, and structured output generation for AgentOS.

🖼️

Multimodal Generation

Image understanding, screenshot-to-code conversion, and multimodal prompt processing for app and video generation.

🔀

Model Routing Layer

Vertex AI as primary inference target. External model routing for non-Google models (Claude, GPT, MiMo) where specialized capabilities are needed.
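A minimal sketch of what such a routing layer might look like. The task taxonomy, model names, and call interface below are illustrative assumptions, not the production system:

```python
# Hypothetical model-routing sketch: pick a primary model per task
# type and fall back down an ordered candidate list on failure.

ROUTES = {
    "planning":   ["gemini-pro", "claude", "gpt"],
    "code":       ["gemini-pro", "claude"],
    "multimodal": ["gemini-pro"],
}

def route(task_type, call, default=("gemini-pro",)):
    """Try each candidate model in order; return (model, result)."""
    last_err = None
    for model in ROUTES.get(task_type, default):
        try:
            return model, call(model)
        except Exception as err:  # provider outage, quota, timeout...
            last_err = err
    raise RuntimeError(f"all models failed for {task_type!r}") from last_err

# Simulated provider call: the first choice is down, failover kicks in.
def fake_call(model):
    if model == "gemini-pro":
        raise TimeoutError("simulated outage")
    return f"ok:{model}"

model, result = route("code", fake_call)
print(model, result)  # claude ok:claude
```

In production the `call` argument would wrap the actual Vertex AI or external-provider client, and per-model cost and latency stats would feed the ordering of each candidate list.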

📊

Monitoring & Cost Control

Vertex AI usage metrics fed into BigQuery for workload forecasting, cost attribution, and AI/ML spend reporting.

🤝

Partner Model Credits

Claude via Vertex AI Model Garden, where available, drawing on the $10K in partner model credits included in the AI Tier.

6-Month and 12-Month Usage Projection

Projections are based on current internal workflows, deployed product demos, generated media artifacts, APK build artifacts, and planned early-access rollout.

| Metric | Current Baseline | 6-Month | 12-Month | Evidence Basis |
| --- | --- | --- | --- | --- |
| Daily AI generations | ~300 | 20,000 | 80,000 | Workflow logs, generated assets in R2 |
| Monthly token usage | ~60M | 2B | 8B | Model routing logs, current API usage |
| Video rendering minutes/mo | 1,200 | 80,000 | 300,000 | 22+ render jobs, MP4 outputs in R2 |
| Async jobs/day | 1,000 | 120,000 | 500,000 | Queue logs, render plans, task traces |
| Generated storage | ~0.5 TB | 12 TB | 50 TB | R2 assets: 435 MB videos + 136 MB APKs |
| GPU inference hours/mo | 120 | 3,000 | 10,000 | TTS, video render, media processing jobs |

Migration Timeline

Phased migration from current infrastructure to Google Cloud, starting from credits activation.

| Phase | Timeline | Scope |
| --- | --- | --- |
| Phase 1 | Before activation | Prepare GCP billing account, project structure, IAM, and budget alerts using company domain email |
| Phase 2 | Month 1 | Integrate Vertex AI / Gemini API for agent planning and model routing workloads |
| Phase 3 | Month 1–2 | Deploy API services and agent worker containers on Cloud Run; migrate lightweight orchestration |
| Phase 4 | Month 2–3 | Move generated media (videos, APKs, images, audio) and prompt archives to Cloud Storage |
| Phase 5 | Month 3–4 | Introduce Pub/Sub for async task dispatch, render queues, retries, and workload fan-out |
| Phase 6 | Month 4–6 | Add BigQuery analytics dashboards and Cloud Monitoring for reliability, cost, and workload reporting |
| Phase 7 | Month 6+ | Scale containerized generation workers on GKE with GPU-enabled compute for video and media processing |
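To make the Phase 5 async-dispatch step concrete, a sketch of the kind of JSON task message a render queue on Pub/Sub might carry. The schema and field names are illustrative assumptions; Pub/Sub itself transports opaque bytes:

```python
# Hypothetical render-job message for async fan-out over Pub/Sub.
import json
import uuid

def make_render_job(product, source_uri, priority=5):
    """Build a self-describing task message for a render queue."""
    return {
        "job_id": str(uuid.uuid4()),
        "product": product,      # e.g. "drama-engine"
        "source": source_uri,    # input artifact in Cloud Storage
        "priority": priority,    # consumer-side ordering hint
        "attempt": 0,            # incremented by the retry handler
    }

msg = make_render_job("drama-engine", "gs://bucket/briefs/ep01.json")
payload = json.dumps(msg).encode("utf-8")  # Pub/Sub publishes bytes
print(msg["product"], msg["attempt"])
```

Carrying `attempt` inside the message lets retry logic cap redelivery and route exhausted jobs to a dead-letter topic without consumer-side state.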

Evidence

Real deployed systems, real generated outputs, real infrastructure. Full inventory on the Evidence page.

🎬

Company Intro Video

Q-Star AI platform overview: products, infrastructure, and Google Cloud migration plan.

Watch Video →
📁

Product Demo Videos (V1โ€“V5)

All 5 demo videos are rendered and public. V4 AppForge and V5 Google Cloud Migration are now complete.

V3 AgentOS Ops Demo → · V4 AppForge APK Demo → · V5 GCP Migration Demo →
📱

APK Artifacts

7 QClip APK builds (136 MB total) proving real app generation and packaging pipeline.

View Evidence →
๐Ÿ—๏ธ

Live Demo Systems

drama.q-star.ink: Drama Engine live demo. AI Music live on WeChat Mini-Program.

Drama Engine →
📄

Application Documents

Investor brief, architecture doc, usage projection, and evidence pack: the full application package.

View Documents →
📊

R2 Evidence Inventory

Complete inventory of all evidence assets in Cloudflare R2, with public status and Google Cloud relevance.

View Inventory →

Application Readiness Checklist

Google Cloud AI Tier application readiness status.

Ready to Review Our Application

Full evidence package, architecture documentation, and usage projections are available. Contact us for the complete application materials.

Contact Us · View Evidence