Project THE X (the "Platform") is a Meta Ads media-buying dashboard. This Policy describes what data the Platform collects, how it is used, and the choices you have. It applies alongside the Terms of Service; by using the Platform you accept both.
1 — Who runs the Platform
Project THE X is operated by Omar Elsabbahi Saber (the "Operator"). The Operator is the data controller for the platform-level metadata described below. The agency administrator who invited you is the data controller for the workspace-level content stored inside their tenant. Every third-party service named in §3 is an independent data controller or processor of the data you elect to send to it through its respective integration.
Independent creation & trademark notice. Project THE X was independently designed and developed by the Operator. It is not affiliated with, sponsored by, or endorsed by any other business, product or service. Third-party trademarks listed in §3 (Meta, Anthropic, Google, Zoho, Telegram, Render and others) are the property of their respective owners and appear in this Policy only to identify the services the Platform integrates with — nominative fair use, no partnership or endorsement implied. A formal version of this notice lives in §16 of the Terms of Service, including the channel for rights-holders to raise a concern.
2 — Data we collect
From you, directly
- Account info — email address, display name, password (stored as a salted PBKDF2-SHA512 hash; the plaintext is never persisted), the role assigned at signup (Admin / Agency / Individual), and the agency you belong to (or the personal namespace for standalone Individuals).
- Workspace content — clients, projects, campaigns, creative assets, reports, media plans, lead feedback, team chat messages, QA reviews and any other artefacts you create inside the dashboard.
- Operational settings — alert thresholds, integration credentials (see §3), branding (logo, colours), notification preferences, and the Telegram chat IDs you've registered.
From third-party services you authorise
Each integration in §3 collects a defined slice of data on your behalf. We only request the scopes documented for each provider; you can disconnect any integration from Settings → Integrations and the stored credentials are dropped immediately.
Automatically
- Server-side logs — request paths, status codes, exception traces and Graph-API request/response shells (with access tokens redacted). Logs auto-rotate every day at midnight; only the current day's logs are retained.
- Local backups — every 3 hours the Platform creates a tar.gz of data/ for disaster recovery. Backups stay on the same machine; if the operator has configured BACKUP_REMOTE_* env vars (S3-compatible object store), copies are uploaded there too — see §3.
- AI audit trail — for every AI job dispatched to your companion we record a row in the ai_jobs table (job id, label, kind, model, status, error, last 400 chars of stdout). No prompt content or full response is stored.
- Companion telemetry — the local companion daemon on your machine reports its hostname, OS platform, and version to the dashboard over its outbound WebSocket. No file paths, no environment variables, no data outside the Platform's own scratch dir.
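The backup cadence above can be sketched with Python's standard library. This is a hypothetical illustration, not the Platform's actual code; the data/ and data/backups/ paths match the layout in §5, and the keep-last-8 rule matches §7.

```python
import tarfile
import time
from pathlib import Path

BACKUP_DIR = Path("data/backups")
KEEP = 8  # per §7: only the most recent 8 archives are retained


def _exclude_backups(info: tarfile.TarInfo):
    # Skip the backups directory itself so archives never nest.
    return None if info.name.startswith("data/backups") else info


def make_backup(source: Path = Path("data")) -> Path:
    """Create a timestamped tar.gz of the data directory."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = BACKUP_DIR / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=source.name, filter=_exclude_backups)
    return archive


def prune_backups() -> None:
    """Delete all but the KEEP newest archives (names sort by timestamp)."""
    archives = sorted(BACKUP_DIR.glob("backup-*.tar.gz"))
    for old in archives[:-KEEP]:
        old.unlink()
```

A scheduler would call make_backup() then prune_backups() every 3 hours; the 8-archive window is what gives the roughly 24 hours of history noted in §7.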
3 — Third-party platforms we connect to
The Platform integrates with the services below. Each is an independent third party with its own privacy policy; the Operator is not responsible for how they process the data you send them. Click Disconnect on any integration to drop the stored credentials immediately.
Meta (Graph API)
OAuth access token, your Meta user ID, display name, and the timestamp of connection. Used to list ad accounts, Pages, Instagram business accounts, audiences and lead forms; read insights (spend, impressions, clicks, leads, CPL, CTR, CPC, CPM, frequency, reach); create campaigns / ad sets / ads / creatives / instant lead forms you initiate; and read leads from your own forms when Lead Feedback is enabled. We never read your personal profile, friends, posts, photos or messages.
Meta Privacy Policy ↗

Anthropic Claude
Powers the research / planning / vision / copywriting sub-agents. Claude runs on your own machine via the local Companion daemon using your claude login credentials — your Anthropic account is billed, and the Operator holds no Claude API key on the server. Prompts may contain project names, ad-copy drafts, and audience descriptions the Platform constructs from your workspace. Anthropic's Commercial Terms apply; under those terms, your data is not used to train any Anthropic model.
Claude tools on your machine
The Companion's claude CLI runs the WebSearch, WebFetch, Bash and Read tools on your machine when sub-agents request them. WebSearch queries (e.g. for project research) go to Anthropic's search infrastructure; WebFetch pulls publicly-accessible URLs; Bash and Read operate inside the scratch directory the Platform sets up. None of this traffic is proxied through the dashboard server.
Google Sheets
OAuth access + refresh tokens you grant via the consent screen. Scope is read-only (spreadsheets.readonly, drive.readonly). Used solely to read lead rows from sheets you connect to a client. The OAuth app's client ID and secret are operator-level credentials configured via env vars and shared across tenants on this deployment.
Zoho CRM
Self-Client refresh token you paste into Settings → Integrations. Exchanged for short-lived access tokens that read Lead / Contact / User records (scope ZohoCRM.modules.leads.READ, .contacts.READ, .users.READ, .settings.READ). The OAuth app's client ID and secret are operator-level credentials. The Operator chose the Self-Client flow specifically to keep your CRM data inside your Zoho org — no OAuth redirect server is involved.
Telegram
Your bot token, the chat IDs of teammates you've invited via /start <code>, the messages the bot sends (alerts, summaries, AI-chat responses), and the messages teammates send the bot (which are forwarded to the agent chat when the admin has enabled it). Telegram's policy governs delivery and storage on their servers.
Render (hosting)
The Platform is deployed on Render's infrastructure in the Frankfurt region. Every request hits Render before reaching the Platform, so Render processes connection-level metadata (IP, user agent, timestamps, request size). Render also retains build/runtime logs the Operator can access. Persistent disk for SQLite, uploaded assets and backups lives on Render's managed storage.
Remote backup storage (optional, S3-compatible)
When the operator sets BACKUP_REMOTE_ENDPOINT + key/secret, backup tarballs are uploaded to that bucket — typically Cloudflare R2, Backblaze B2 or AWS S3. The chosen provider's terms govern storage. The endpoint, region and prefix are operator-configured at deploy time.
GitHub
The Platform's source code lives on GitHub. The Operator pushes commits and Render pulls them at deploy time. No end-user data flows to GitHub. If you read the README, browse the issue tracker, or download the companion tarball directly from the GitHub release page, GitHub's own logging applies.
GitHub Privacy ↗

Websites visited during AI research
When you ask the agent to research a project — across any vertical (e.g. real estate, e-commerce, services, lead-gen, events) — the WebSearch tool may visit publicly-available reference pages (brand sites, marketplaces, listing pages, public databases) to extract images, pricing, offers and other project details. None of your data is sent to these sites — only their publicly-available pages are read. Each site is independently operated by its respective owner; the Platform has no relationship with any of them.
4 — How we use your data
- To render the dashboard you see and to power the campaign-creation flow.
- To call the Meta Graph API on your behalf when you launch, read or modify campaigns.
- To dispatch AI jobs to your local companion (which calls Claude on your machine) for research / planning / vision / copywriting / chat.
- To send Telegram alerts to teammates you've authorised when an alert rule fires.
- To produce logs, audit trails and backups for security and disaster recovery.
We do not sell your data. We do not share your data with advertisers. We do not use your data to train any AI model.
5 — Where data lives
Most persistent data lives in a single SQLite database file on Render's managed persistent disk, with the heavier per-tenant content (assets, backups) on the same disk:
- data/projectthex.db — SQLite database (users, sessions, invites, clients, integrations, AI-job audit trail).
- data/users/<agency-id>/… — per-tenant filesystem content (assets, briefs, project research scratch, backups index).
- data/backups/ — 3-hour rolling tar.gz archives (last 8 kept).
- logs/ — current day's server logs (auto-rotated at midnight).
- Optional: an S3-compatible bucket if BACKUP_REMOTE_* env vars are set — see §3.
6 — Sharing data inside an agency
Members invited to the same agency share clients, knowledge base, assets, reports, media plans, alerts, QA reviews and team chat. The Individual role has access to a trimmed subset (campaigns, projects, insights, knowledge base, assets, reports, Meta + Google Sheets integrations, chat, overview). Individual accounts (signed up via an individual code) live in a personal namespace and cannot see any other tenant's data.
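The namespace isolation described above could be enforced with a path check along these lines. This is a hypothetical sketch, not the Platform's actual implementation; the data/users/<agency-id>/ layout follows §5, and the function name is illustrative.

```python
from pathlib import Path

DATA_ROOT = Path("data/users").resolve()


def tenant_path(agency_id: str, relative: str) -> Path:
    """Resolve a path inside one tenant's namespace, rejecting anything
    that would escape it (e.g. '../other-tenant/secret')."""
    base = (DATA_ROOT / agency_id).resolve()
    target = (base / relative).resolve()
    if base != target and base not in target.parents:
        raise PermissionError(f"path escapes tenant namespace: {relative}")
    return target
```

Every filesystem read or write goes through a check of this shape, so a request authenticated for one agency can never resolve to another tenant's directory.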
7 — Retention
- Logs — 24 hours (auto-rotated).
- Backups — the most recent 8 backups (≈24 hours of history).
- Sessions — 30 days from last sign-in.
- OAuth tokens — kept until you click Disconnect on the integration, or until the provider expires the token, whichever comes first.
- AI job audit rows — kept indefinitely for diagnostic purposes; only metadata (label / kind / status / 400-char stdout tail), no prompt content or full responses.
- Workspace content — kept until you delete it. Account deletion triggers a 30-day grace period before permanent removal, during which the data is recoverable on request.
8 — Your rights (GDPR / PDPL / CCPA / etc.)
Where applicable data-protection law (EU GDPR, UK GDPR, Egyptian PDPL no. 151/2020, California CCPA/CPRA, or similar) recognises the following rights, you may exercise them by contacting your agency administrator:
- Access to the personal data the Platform holds about you;
- Rectification of inaccurate data;
- Erasure ("right to be forgotten"), subject to §7's retention limits;
- Restriction of processing and objection to processing on the basis of legitimate interests;
- Data portability — every artefact lives in SQLite / JSON and can be exported.
The legal basis for processing is (a) contract — running the Platform you signed up for, (b) legitimate interests — operating, securing and improving the Platform, and (c) consent where you have authorised an integration. You can withdraw consent for any integration by clicking Disconnect.
Where Lead Ads collects personal data about your audience (lead-form respondents), the agency operating the campaign is the data controller of that data — not the Operator. The Platform acts as a processor passing the data through to your CRM / sheet on your instruction.
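The portability point above (everything lives in SQLite / JSON) can be illustrated with a minimal export sketch. The table and column names here are hypothetical, chosen for illustration only.

```python
import json
import sqlite3


def export_user(db_path: str, user_id: int) -> str:
    """Dump one user's rows from a hypothetical 'users' table as JSON."""
    con = sqlite3.connect(db_path)
    con.row_factory = sqlite3.Row  # rows become dict-like, keyed by column
    rows = con.execute(
        "SELECT id, email, display_name FROM users WHERE id = ?", (user_id,)
    ).fetchall()
    con.close()
    return json.dumps([dict(r) for r in rows], indent=2)
```

Because the store is a single SQLite file, a data-subject export is essentially a filtered SELECT serialised to JSON, with no proprietary format in the way.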
9 — Security
- Passwords hashed with PBKDF2-SHA512, 120 000 iterations, per-user 16-byte salt.
- Sessions are 32-byte random tokens delivered as HttpOnly cookies with SameSite=Lax; the cookie is marked Secure when the request arrives over HTTPS.
- Sensitive tokens (Meta OAuth, Zoho/Google refresh tokens) are encrypted at rest with AES-256-GCM using a server-side key (the ENCRYPTION_KEY env var).
- A Basic-Auth perimeter optionally gates the whole dashboard (BASIC_AUTH_PASS) — including the WebSocket upgrade — leaving only the public legal pages, the companion install surface, and the OAuth callbacks unauthenticated.
- Rate limits guard the signup, login, password and Meta Graph endpoints.
- Per-tenant boundaries are enforced server-side: every read/write resolves the user's agency_id before touching SQLite or filesystem paths.
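The hashing scheme in the first bullet can be sketched with Python's standard library. This is a simplified illustration of the stated parameters (PBKDF2-SHA512, 120 000 iterations, per-user 16-byte salt), not the Platform's actual code.

```python
import hashlib
import hmac
import os

ITERATIONS = 120_000  # per §9
SALT_BYTES = 16       # per-user random salt


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are persisted, never the plaintext."""
    salt = os.urandom(SALT_BYTES)
    digest = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

The constant-time comparison (hmac.compare_digest) matters because a naive == on digests can leak timing information to an attacker probing the login endpoint.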
10 — Meta Platform Terms compliance
The Platform abides by Meta's Platform Terms and Developer Policies. We delete Meta-derived data when you click Disconnect, and we honour Meta's data-deletion callbacks when received. To request the removal of all Meta-sourced data the Platform holds about you, click Disconnect on the relevant client, or ask your agency administrator.
11 — AI processing & no model training
The Platform dispatches prompts to Claude via the Companion daemon on your machine. Prompts may include project names, ad-copy drafts, audience descriptions and instructions the Platform constructed from your workspace. Anthropic's Commercial Terms apply; under those terms, your data is not used to train any Anthropic model. The Operator never sees your raw Claude responses — only the 400-character stdout tail recorded in the audit trail described in §2.
12 — Children
The Platform is not directed at children under 16 and the Operator does not knowingly collect their personal data. If you believe a child has provided personal data to the Platform, ask your agency administrator to delete it.
13 — Security caveat & no warranty
The Operator implements reasonable technical and organisational measures (§9) but cannot guarantee impenetrability. You acknowledge that no internet-facing system is 100% secure and that you use the Platform at your own risk. The Operator's liability for any security incident is limited as set out in §11 of the Terms of Service.
Vibe-coded disclosure. The Platform is built by the Operator in collaboration with an AI coding assistant. The Operator has no formal technical or programming background. Architecture decisions, error handling, security hardening and performance tuning are directed by intuition, end-user feedback and the model's suggestions rather than by classical software-engineering training or third-party code audit. You should weigh this when entrusting the Platform with sensitive data — and you accept this as part of the as-is warranty disclaimer.
14 — Changes to this policy
The Operator may update this Policy from time to time. Material changes will be announced in-product or by email to your account address. Continued use of the Platform after a change takes effect constitutes acceptance.
15 — Contact
Day-to-day privacy questions can go to the agency administrator who issued your invite. For matters involving the Operator directly — security disclosures, formal data-subject requests under GDPR / PDPL / CCPA, takedown requests, or any other formal correspondence — email support@project-the-x.com. The Operator aims to respond within five business days; mandatory-response timelines required by applicable data-protection law are honoured ahead of that.
This Policy is written in English. Translations, if provided, are for convenience only — the English version controls in case of conflict.