Context: The Marketing Platform Journey Continues
Remember that September post where I said I was speed-running a 10-agent marketing platform while taking naps? Four weeks in, I had 3 working agents and was targeting October/November for alpha launch.
Well, it’s late October now. Time for a progress update.
The Good News:
– The platform finally has a name: STRAŦUM (Intelligence Over Execution)
– Complete brand guidelines and design system (turns out branding takes longer than coding)
– 9 of 10 agents built and integrated
– Multi-tenant architecture actually working
– Final stage: preparing for pre-alpha, invitation-only testing
The Reality Check:
I got sick for 10 days right after being on leave for 8 days (I know, life happens). That put everything on pause and shifted my timeline. So much for that October launch, right?
But here’s the thing about solo development: you control the pace. No pressure to ship when you’re not ready. No investors breathing down your neck. Just… build it right.
This blog post is about one of those “build it right” moments that turned into a 6-hour debugging marathon. Because right when I was getting back to full speed after being sick, my deployment platform had other plans.
Table of Contents
- When AWS Goes Down, Engineers Get Creative (and Sometimes Regret It)
- The Switch That Seemed Too Easy
- The Problem: HTTP Calls From an HTTPS Page
- First Debugging Attempt: The Environment Variable Hunt
- Second Attempt: The Great Reimport
- The Plot Twist: Missing Files in Git
- Third Attempt: Build-Time Validation
- The Lightbulb Moment: Local Files Were Being Deployed
- The Fix: One Line
- But Wait, There’s More: The Cache Conspiracy
- The Return to Vercel: Sometimes Boring is Better
- What I Learned (The Hard Way)
- The Code That Saved Me
- The Real Lesson: Debugging is Detective Work
- Was It Worth It?
- The Git Log Tells the Tale
- For Other Engineers Fighting Mixed Content Errors
- The Migration Checklist I Wish I Had
- Still Coding, Still Learning, Still Breaking Things (Sometimes)
When AWS Goes Down, Engineers Get Creative (and Sometimes Regret It)
Here’s something they don’t teach you in coding bootcamps: sometimes your deployment platform just… disappears. Not because of something you did wrong, but because AWS decided to have a widespread outage affecting Vercel deployments globally.
That’s how my Monday started. My production site was down, Vercel was showing errors, and I had exactly one thought: “I need a backup. Fast.”
Enter Cloudflare Pages. I’d heard good things. Great CDN, automatic deployments, simple setup. What could possibly go wrong?
Narrator: Everything. Everything could go wrong.
The Switch That Seemed Too Easy
The migration to Cloudflare Pages was surprisingly smooth. Connected my GitHub repo, set environment variables in the dashboard, pushed to main. Three minutes later: deployed.
“Wow,” I thought. “This is almost too easy.”
Then I opened the production site.
```
Mixed Content: The page at 'https://my-site.com/...' was loaded over HTTPS,
but requested an insecure resource 'http://stratum-api.us-central1.run.app/...'
```
That sinking feeling when you realize your celebration was premature? Yeah, that.
The Problem: HTTP Calls From an HTTPS Page
My React app was making HTTP requests to my backend API while the page itself was loaded over HTTPS. Browsers (rightfully) block this as a security risk. Mixed Content errors. Every single API call was failing.
“But wait,” I told myself, “I have `ensureHttpsInProduction()` in my code! It’s supposed to convert HTTP to HTTPS automatically!”
I checked the deployed bundle. The function was there. The logic was correct. The browser console showed the conversion happening. So why were HTTP requests still getting through?
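For reference, the rule the browser applies here is simple enough to sketch as a small predicate. This is a simplified model (real browsers also treat other "potentially trustworthy" origins specially), with localhost as the only exception handled:

```typescript
// Simplified sketch of the browser's mixed-content rule: a plain-HTTP
// request from an HTTPS page is blocked, unless the target is a
// potentially trustworthy origin such as localhost.
function isBlockedMixedContent(pageUrl: string, requestUrl: string): boolean {
  const page = new URL(pageUrl);
  const request = new URL(requestUrl);
  // localhost/127.0.0.1 are treated as trustworthy, so HTTP to them is allowed
  const trustworthy = ['localhost', '127.0.0.1'].includes(request.hostname);
  return page.protocol === 'https:' && request.protocol === 'http:' && !trustworthy;
}
```

That last clause is exactly why local development never surfaced the problem: HTTP calls to localhost are fine, HTTP calls from a deployed HTTPS page are not.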
First Debugging Attempt: The Environment Variable Hunt
Maybe the environment variables in Cloudflare weren’t being picked up?
```bash
# Checked Cloudflare Dashboard
VITE_API_URL=https://stratum-api.us-central1.run.app ✓
VITE_SUPABASE_URL=https://your-project.supabase.co ✓
```
All HTTPS. All correct.
I triggered a rebuild. Waited. Deployed. Opened the site.
Same error. Still HTTP requests.
Second Attempt: The Great Reimport
Maybe files weren’t using the centralized `API_BASE_URL`?
I spent the next hour updating 24 files to import from `@/lib/api` instead of directly using `import.meta.env.VITE_API_URL`. Every component that made API calls got the treatment.
```typescript
// Before
const response = await fetch(`${import.meta.env.VITE_API_URL}/api/v1/...`);

// After
import { API_BASE_URL } from '@/lib/api';
const response = await fetch(`${API_BASE_URL}/api/v1/...`);
```
Pushed. Deployed. Waited.
Still broken.
At this point, I’m starting to question my life choices.
The Plot Twist: Missing Files in Git
But wait, it gets worse.
While investigating why my HTTPS enforcement wasn’t working, I discovered something terrifying. The `api.ts` file containing `ensureHttpsInProduction()` wasn’t even in my Git repo.
Neither was `authService.ts`. Or `csvSanitizer.ts`. Three critical frontend files, just… missing.
How? The `.gitignore` file had this:
```
# Python stuff
lib/
build/
dist/
```
Seems reasonable for Python, right? Except my frontend utilities lived in `apps/web/src/lib/`. The broad `lib/` pattern was accidentally ignoring my entire frontend lib directory!
This meant:
1. Cloudflare was building from the repo (missing these files)
2. My local development had these files (working fine locally)
3. I had NO IDEA they weren’t being tracked
The fix:
```diff
# .gitignore - Before
-lib/
# .gitignore - After
+apps/api/lib/ # Python-specific
+!apps/web/src/lib/ # Explicitly include frontend lib
```
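The pattern semantics can be sketched in a few lines. Note this matcher is deliberately simplified and skips real gitignore subtleties; in particular, git cannot re-include a file whose parent directory is already excluded, which is why *removing* the broad `lib/` pattern matters more than the `!` negation:

```typescript
// Simplified sketch of .gitignore directory-pattern matching (NOT the full
// spec): a bare "lib/" matches a directory named "lib" at any depth, a
// pattern containing "/" is anchored to the repo root, and "!" negates.
// Here, later patterns simply win; real git also refuses to re-include
// anything under an excluded parent directory.
function isIgnored(filePath: string, patterns: string[]): boolean {
  let ignored = false;
  for (const raw of patterns) {
    const negated = raw.startsWith('!');
    const pattern = (negated ? raw.slice(1) : raw).replace(/\/$/, '');
    const matches = pattern.includes('/')
      ? filePath === pattern || filePath.startsWith(pattern + '/') // anchored
      : filePath.split('/').includes(pattern);                     // any segment
    if (matches) ignored = !negated;
  }
  return ignored;
}
```

Under the old rules, `apps/web/src/lib/api.ts` matched the bare `lib/` pattern; under the new ones, only the Python `apps/api/lib/` directory does.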
Added the missing files, committed, pushed. Now Cloudflare had the HTTPS enforcement code!
Except… the HTTP errors continued.
Third Attempt: Build-Time Validation
At this point, I’m questioning everything. “You know what?” I thought. “If this keeps happening, I need to PREVENT it from ever reaching production.”
I wrote a check in my Vite config that fails the build if it detects HTTP URLs in production:
```typescript
function validateProductionUrls(mode: string) {
  if (mode !== 'production') return;
  const apiUrl = process.env.VITE_API_URL || '';
  if (apiUrl && apiUrl.trim().startsWith('http://')) {
    if (!apiUrl.includes('localhost') && !apiUrl.includes('127.0.0.1')) {
      throw new Error(
        `❌ HTTPS ENFORCEMENT FAILED
Environment Variable: VITE_API_URL
Current Value: ${apiUrl}
This will cause Mixed Content errors in production!`
      );
    }
  }
}
```
Genius, right? Now it’s IMPOSSIBLE to deploy with HTTP URLs.
Deployed again. Build passed (environment variables were HTTPS). Site loaded.
Same. Damn. Error.
The Lightbulb Moment: Local Files Were Being Deployed
Late (very late) in the evening, I had a realization.
I checked the deployed JavaScript bundle again. Really looked at it this time. The URL inside was:
```javascript
"http://stratum-api.us-central1.run.app"
```
But my Cloudflare environment variables were HTTPS. So where was this HTTP URL coming from?
Then it hit me. My local `.env.production` file.
```bash
# apps/web/.env.production (LOCAL FILE)
VITE_API_URL=http://stratum-api.us-central1.run.app
```
Cloudflare Pages was deploying my local environment file instead of using the dashboard variables!
I checked `.cloudflare-pages-ignore`:
```
# Environment files
.env
.env.local
.env.development
.env.test
# .env.production ← MISSING!
```
Face. Palm.
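With hindsight, a tiny pre-deploy scan of local env files would have surfaced this immediately. A minimal sketch (the parsing is deliberately naive KEY=VALUE handling; real dotenv syntax also has quoting and expansion):

```typescript
// Scan the text of a .env-style file and return the keys whose values are
// plain-HTTP URLs (ignoring localhost, which is fine for local development).
function findInsecureEnvUrls(envText: string): string[] {
  const offenders: string[] = [];
  for (const line of envText.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue; // skip blanks/comments
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue;
    const key = trimmed.slice(0, eq).trim();
    const value = trimmed.slice(eq + 1).trim();
    if (value.startsWith('http://') &&
        !value.includes('localhost') &&
        !value.includes('127.0.0.1')) {
      offenders.push(key);
    }
  }
  return offenders;
}
```

Run against the contents of `.env.production` in a pre-push hook or CI step, a non-empty result means an insecure URL is about to ship.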
The Fix: One Line
```diff
# apps/web/.cloudflare-pages-ignore
.env
.env.local
.env.development
.env.test
+.env.production
```
Deployed. Waited.
Different error this time! Progress!
```
Access to fetch at 'https://stratum-api.us-central1.run.app/...'
from origin 'https://preview-xyz.stratum-marketing-suite.pages.dev'
has been blocked by CORS policy
```
CORS errors! Beautiful, beautiful CORS errors! That meant HTTPS was working!
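The CORS fix itself had a wrinkle worth noting: preview deployments get random subdomains, so a static allowlist misses them. One way to handle it (a sketch, not my exact backend code; `my-site.com` is a placeholder, the `pages.dev` suffix comes from the error message above) is an origin predicate that accepts the production domain plus any HTTPS preview for the project:

```typescript
// Origin check for CORS: allow the production domain and any HTTPS
// preview deployment under the project's pages.dev suffix.
const ALLOWED_ORIGINS = ['https://my-site.com']; // placeholder production domain
const PREVIEW_SUFFIX = '.stratum-marketing-suite.pages.dev';

function isAllowedOrigin(origin: string): boolean {
  if (ALLOWED_ORIGINS.includes(origin)) return true;
  try {
    const { protocol, hostname } = new URL(origin);
    // The leading dot in PREVIEW_SUFFIX prevents lookalike domains
    // (e.g. "evil-stratum-marketing-suite.pages.dev") from matching.
    return protocol === 'https:' && hostname.endsWith(PREVIEW_SUFFIX);
  } catch {
    return false; // malformed Origin header
  }
}
```

The backend's CORS middleware would call this per request instead of comparing against a fixed list.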
But Wait, There’s More: The Cache Conspiracy
Fixed CORS. Deployed again. Opened my custom domain.
HTTP errors again.
What?!
Turns out, Cloudflare’s CDN was aggressively caching the old bundle. The new deployment (with HTTPS) was live at the preview URL, but my custom domain was serving cached content with HTTP URLs.
Cloudflare’s cache purging requires:
1. Finding the right zone settings (not in the Pages dashboard)
2. Navigating through domain settings (not obvious)
3. Manually purging cache (for every deployment)
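A quick way to confirm this kind of cache problem without touching the dashboard: Vite embeds a content hash in bundle filenames, so if the HTML served at the preview URL references a different hash than the HTML at the custom domain, the domain is serving a stale deployment. A sketch (assuming Vite's default `index-<hash>.js` naming):

```typescript
// Extract the content hash from a Vite-built bundle reference in HTML,
// e.g. <script src="/assets/index-AbC123.js"> → "AbC123".
function bundleHash(html: string): string | null {
  const match = html.match(/\/assets\/index-([A-Za-z0-9_-]+)\.js/);
  return match ? match[1] : null;
}

// Compare the HTML from the preview URL and the custom domain:
// different hashes mean the domain is serving a cached, stale bundle.
function isServingStaleBundle(previewHtml: string, productionHtml: string): boolean {
  const preview = bundleHash(previewHtml);
  const production = bundleHash(productionHtml);
  return preview !== null && production !== null && preview !== production;
}
```

In practice you'd `fetch()` both pages (with cache-busting headers) and feed the bodies in.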
After hours of debugging HTTP/HTTPS issues, I made a decision.
The Return to Vercel: Sometimes Boring is Better
AWS was back up. Vercel was working again.
I migrated everything back to Vercel. Why?
1. Automatic cache invalidation – No manual purging needed
2. Simpler environment variable handling – What you set is what you get
3. Faster debugging – Less infrastructure to reason about
4. Battle-tested – I know its quirks
The Vercel deployment took 3 minutes. No HTTP errors. No cache issues. Just… worked.
What I Learned (The Hard Way)
1. Always Ignore .env.production in Deployment Platforms
```
# .vercelignore
# .cloudflare-pages-ignore
# .netlify-ignore
.env
.env.local
.env.development
.env.test
.env.production ← DON'T FORGET THIS
```
2. Broad .gitignore Patterns Are Dangerous in Monorepos
```diff
# ❌ Bad - Ignores frontend AND backend lib folders
-lib/
-build/
-dist/
# ✅ Good - Specific to each context
+apps/api/lib/ # Python-specific
+apps/api/build/
+apps/api/dist/
+apps/web/dist/ # Vite output only
```
Always ask: “Could this pattern accidentally ignore something important?”
In a monorepo with multiple languages (Python + TypeScript), broad patterns meant for one ecosystem can accidentally ignore critical files in another.
3. Build-Time Validation is Still Worth It
Even though it didn’t catch the local file issue, the build-time validation prevents *future* misconfigurations:
```typescript
// vite.config.ts
export default defineConfig(({ mode }) => {
  validateProductionUrls(mode);
  return {
    // ... config
  };
});
```
4. Multi-Layer Defense Works
The final architecture has THREE layers:
– Build-time: Fails build if HTTP URLs detected
– Runtime: Converts HTTP → HTTPS if page loaded over HTTPS
– Deployment: Excludes local .env files
5. Preview URLs are Your Friend
Always test on the preview URL first. If that works but your custom domain doesn’t, it’s usually caching.
6. Know Your Platform’s Quirks
– Vercel: Simple, auto-invalidates cache, environment variables “just work”
– Cloudflare Pages: Great CDN, but manual cache purging and more complex setup
The Code That Saved Me
Here’s the final `ensureHttpsInProduction()` function:
```typescript
function ensureHttpsInProduction(url: string): string {
  // Only convert in browser context when site is loaded over HTTPS
  if (typeof window !== 'undefined' && window.location.protocol === 'https:') {
    // Don't convert localhost/127.0.0.1 URLs (local development)
    if (url.startsWith('http://') &&
        !url.includes('localhost') &&
        !url.includes('127.0.0.1')) {
      console.warn('[API] Converting HTTP to HTTPS:', url);
      return url.replace('http://', 'https://');
    }
  }
  return url;
}
```
And the build-time validation in `vite.config.ts`:
```typescript
function validateProductionUrls(mode: string) {
  if (mode !== 'production') return;
  const apiUrl = process.env.VITE_API_URL || '';
  // Check for HTTP (should be HTTPS)
  if (apiUrl && apiUrl.trim().startsWith('http://')) {
    if (!apiUrl.includes('localhost') && !apiUrl.includes('127.0.0.1')) {
      throw new Error(`
❌ HTTPS ENFORCEMENT FAILED
Environment Variable: VITE_API_URL
Current Value: ${apiUrl}
Mixed Content Error Prevention:
Browsers block HTTP requests from HTTPS pages.
Fix: Update environment variables to use HTTPS URLs.
      `);
    }
  }
}
```
The Real Lesson: Debugging is Detective Work
This wasn’t a coding problem. It was a configuration archaeology expedition.
The real bugs were:
1. ✅ Broad `.gitignore` patterns ignoring critical frontend files
2. ✅ Missing `.env.production` in `.cloudflare-pages-ignore`
3. ✅ Aggressive CDN caching masking the fix
4. ✅ Assuming dashboard environment variables would take precedence over local .env files
The technical solution was one line in a `.ignore` file.
The debugging? That took 6 hours, 14 deployments, and way too much coffee.
Was It Worth It?
Absolutely. Here’s what I gained:
1. Deep understanding of Mixed Content security policies
2. Build-time validation that prevents future issues
3. Multi-layer HTTPS enforcement that’s platform-agnostic
4. Real appreciation for Vercel’s simplicity
And most importantly: a great debugging story to share. 😛
The Git Log Tells the Tale
```bash
2033b9a fix: add .env.production to .vercelignore
3555390 docs: migrate deployment documentation from Cloudflare Pages to Vercel
a4edb09 chore: force clean Cloudflare Pages rebuild
590c271 fix: enhance ensureHttpsInProduction logging
15d69c6 chore: force rebuild of UserProfile bundle
a1c345f fix: add .env.production to cloudflare-pages-ignore ← THE .ENV FIX
635d80d docs: update documentation for Cloudflare Pages migration
edfa611 feat: add build-time HTTPS enforcement
802c1e9 fix: enforce HTTPS for all API calls across frontend
26f6b87 refactor: comprehensive .gitignore audit and cleanup
3758a9c fix: unignore frontend lib directory and add missing files ← THE GITIGNORE FIX
```
In the old world, each of these commits would have represented an hour of debugging; with Claude Code, thankfully, the cycle was much faster. A hypothesis tested. A lesson learned.
For Other Engineers Fighting Mixed Content Errors
If you’re reading this because you’re debugging the same issue, here’s your checklist:
1. Check Your Environment Variables:
```typescript
// Print the actual values being used
console.log('API URL:', import.meta.env.VITE_API_URL);
```
2. Check Your Deployed Bundle:
```bash
# Download and search the JavaScript bundle
curl https://your-site.com/assets/index-ABC123.js | grep "http://"
```
3. Check Your Ignore Files:
```bash
# Make sure .env.production is excluded
cat .vercelignore
cat .cloudflare-pages-ignore
cat .netlify-ignore
```
4. Check Your .gitignore (Monorepos):
```bash
# Make sure critical files aren't being ignored
git ls-files apps/web/src/lib/ # Should show api.ts, etc.
# If empty, check for broad patterns
grep "^lib/" .gitignore # ❌ Too broad
grep "^apps/api/lib/" .gitignore # ✅ Specific
```
5. Check Your Cache:
```bash
# Test on preview URL first
# If preview works but production doesn't = cache issue
```
6. Add Build-Time Validation:
```typescript
// Prevent it from ever happening again
if (mode === 'production' && url.startsWith('http://')) {
throw new Error('HTTPS required in production!');
}
```
The Migration Checklist I Wish I Had
When switching deployment platforms:
– [ ] List ALL environment variables from old platform
– [ ] Set up environment variables in new platform FIRST
– [ ] Add ALL .env files to .ignore (including .env.production)
– [ ] Verify .gitignore isn’t ignoring critical files (run `git ls-files` to check)
– [ ] Test on preview URL before custom domain
– [ ] Check deployed bundle for HTTP URLs
– [ ] Verify CORS settings if backend is separate
– [ ] Document platform-specific quirks
Still Coding, Still Learning, Still Breaking Things (Sometimes)
Six hours of debugging for a one-line fix. That’s software engineering in a nutshell.
The code we ship is important, sure. But the debugging skills we develop? Those are what make us better engineers.
Next time my deployment platform goes down (and there will be a next time), I’ll be ready. I have:
– ✅ Platform-agnostic HTTPS enforcement
– ✅ Build-time validation
– ✅ Better understanding of environment variable precedence
– ✅ A backup deployment strategy
And hopefully, this blog post will save someone else a few of those 6 hours. 🙂