

Jay Long
Software Engineer & Founder
Published January 12, 2024
Updated March 5, 2026
Late last night I sat down with the Cursor agent and pointed it at my company landing site. I've been doing a lot of research and practice with marketing lately: lead capture, SEO, sales funnels, studying the tools that digital marketers use and figuring out where, as an engineer, I might fit in. Marketing is a weak point of mine that I want to strengthen, and I think strengthening it is going to be absolutely necessary as engineering gets more automated.
So I basically told Cursor: scan the landing site, see what kind of improvements we can make. It helped me come up with a plan document and we just started busting out items on the list. I'm really happy with its ability to understand my style, my brand, and my objective with the website. Most of the improvements were centered around the blog, because that's where most of my pages are. We added meta tags, breadcrumbs, social links, a ton of page speed optimization. We added schema objects that I wasn't even aware of, like a full author object with details and tags about the author. And it built a categorization system. In less than an hour.
It's worth talking about the architecture of my blog because it's what makes all of this possible. All of my blog articles are pure Markdown. Not kind-of Markdown. Not a TypeScript file wrapping a Markdown string in a component. Pure Markdown. I could drop these files in a GitHub repo and they'd render formatted.
Early on, each article was its own TypeScript file that created an article component around a Markdown string. I kept distilling that down until all I have to do is add a Markdown file in the proper location and the system picks it up automatically. The React article component parses the Markdown and translates it into HTML. No manual wiring. No config file to update.
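In spirit, that pickup-and-render logic looks something like the sketch below. This is not my actual component; the function names are illustrative, and a real version would hand rendering off to a full Markdown parser rather than a couple of regexes.

```typescript
// Illustrative sketch: derive the URL slug from the Markdown filename,
// mirroring the "drop a file in and it just works" behavior.
function slugFromFilename(filename: string): string {
  return filename.replace(/\.md$/, "");
}

// Minimal Markdown-to-HTML translation for headings and paragraphs.
// A real article component would use a proper Markdown library here.
function renderMarkdown(md: string): string {
  return md
    .split(/\n{2,}/) // split the document into blocks on blank lines
    .map((block) => {
      const heading = block.match(/^(#{1,6})\s+(.*)$/);
      if (heading) {
        const level = heading[1].length;
        return `<h${level}>${heading[2]}</h${level}>`;
      }
      return `<p>${block.trim()}</p>`;
    })
    .join("\n");
}
```

The point of the sketch is the shape of the system: filename in, slug and HTML out, with nothing to register anywhere.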
The whole point of this architecture is exactly what I'm doing right now. My workflow for publishing articles is to record a voice memo and just talk about something I'm interested in or excited about. Sometimes it's biographical, like war stories from the past. I record a monologue, and the voice memo app has a decent transcription feature that runs fast and is reasonably accurate. One tap to copy, paste it into an LLM chat.
I usually use Grok for the article generation step because I'm not trying to generate code or have high-level engineering discussions. Grok is maximally truth-seeking, so it's really good at things like articles.
I used to just talk to an LLM through voice directly, but that caused problems. The LLM wouldn't let me pause for a long time. It would either time out or interrupt me. I need to fumble a little bit. I need to work out my thoughts and push forward, and when I know there's an LLM that's going to interrupt me if I don't talk to it like a person, it doesn't give me a chance to get my thoughts out. That's actually worse than talking to a real person, because a person understands you're going to have those moments. As long as it's not a debate, most people give you the breathing room to wander around, experiment with new thoughts and ideas.
So the workflow now is: record the monologue, take the transcript, paste it into an LLM chat along with a prompt I reuse and evolve over time. The prompt tells the LLM to clean the transcript up. Transcription tools work through the text word by word, and they struggle with tech names: software products and hacker tools use clever misspellings, and acronyms get confusing. When you give the full transcript to an LLM, it has the entire context, so it can infer that a garbled phrase is actually the name of a specific technology or project. A real-time transcription tool is far more likely to get that wrong.
With that prompt and the transcript, I can usually one-shot an article that's viable. Sometimes I'll have a conversation with the LLM for a while and do a second transcription to reflect on some of the perspective the AI added. But mostly I one-shot it. It's a really fast pipeline to get my ideas out on the web.
Because all my articles are pure Markdown in the codebase, I can see a direct path to automating this entire publishing workflow. I was thinking about this while vibing with Cursor on the SEO improvements, and it clicked.
The pipeline would work like this. Step one: I record a voice memo. Step two: I copy the transcript into a GitHub issue ticket. That triggers an agent that has my prompt coded in. The agent sends the transcript to Grok (or whichever API is most convenient), gets back the generated article in Markdown, updates the ticket description with the article content, updates the ticket title with whatever the generated H1 header is, and leaves a comment with the original transcript for archival purposes.
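A small piece of that first agent can be sketched out. The helper names below are hypothetical, and the actual Grok and GitHub API calls are left as comments, but the core move is simple: pull the H1 out of the generated article for the ticket title, and package the rest for the issue update.

```typescript
// Hypothetical shape of the first agent's output: what gets written
// back to the GitHub issue after the article is generated.
interface IssueUpdate {
  title: string;   // the generated article's H1
  body: string;    // the full generated Markdown article
  comment: string; // the original transcript, archived on the ticket
}

// Extract the H1 from the generated Markdown for the ticket title.
function titleFromArticle(markdown: string): string {
  const match = markdown.match(/^#\s+(.+)$/m);
  return match ? match[1].trim() : "Untitled draft";
}

// Assemble the update. In the real pipeline, this object would feed
// calls to the GitHub API (update issue, add comment) after the
// transcript has been sent to Grok and the article returned.
function buildIssueUpdate(article: string, transcript: string): IssueUpdate {
  return {
    title: titleFromArticle(article),
    body: article,
    comment: `Original transcript:\n\n${transcript}`,
  };
}
```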
Then another agent, probably Copilot, assigns itself the ticket, creates a branch, and uses the branch name slug to generate the Markdown filename. This is already how my system generates the URL slug for the blog article path. Copilot creates the Markdown file in the blog's Markdown folder, commits it to the branch, and opens a pull request. Opening a PR against main is already connected to Vercel, so it automatically runs the preview pipeline and gives me a link.
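The naming step is the part worth pinning down, because the branch slug does double duty: it becomes the Markdown filename, and the filename becomes the URL path. A rough sketch, with illustrative names:

```typescript
// The branch slug becomes the Markdown filename.
// e.g. "post/automating-my-blog" -> "automating-my-blog.md"
function filenameFromBranch(branch: string): string {
  const slug = branch.split("/").pop() ?? branch;
  return `${slug}.md`;
}

// The same slug becomes the article's URL path, so branch, file,
// and published URL all agree without any lookup table.
function urlPathFromFilename(filename: string): string {
  return `/blog/${filename.replace(/\.md$/, "")}`;
}
```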
That's it. Five steps, two of them automated. I record a voice memo, paste it into GitHub, and a few minutes later I get an email from Vercel saying my preview is ready. I review the article, hit approve, and it deploys all the way to production. Every bit of this I can do on my phone.
There's some housekeeping that needs to happen daily or weekly. A lot of this stuff is in code, not in a database. Categories are hardcoded in files. So every once in a while, agents need to read through all my blog articles and ask themselves how to improve the categories, what posts to recategorize, what interlinking opportunities exist, what schema objects need updating. This is easy to script for agents. It's routine housekeeping.
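The category review is the kind of thing that reduces to a tiny script. Assuming each article exposes its category names (how they're extracted from the files is beside the point here), a tally like this is enough for an agent to spot empty or bloated categories:

```typescript
// Housekeeping sketch: count how many articles land in each category
// so an agent can flag candidates for merging, splitting, or removal.
function categoryCounts(
  articles: { categories: string[] }[],
): Map<string, number> {
  const counts = new Map<string, number>();
  for (const article of articles) {
    for (const cat of article.categories) {
      counts.set(cat, (counts.get(cat) ?? 0) + 1);
    }
  }
  return counts;
}
```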
I was very close to diving into a database-powered CMS. Not just for the sake of learning it so I can help other people who depend on those solutions, but because I actually thought it might be the most practical way for me to manage my own stuff. And then I had a quick exchange with the Cursor agent and we popped out so many features in under an hour. Social linking, categories, breadcrumbs, page speed improvements, meta schema objects. If I had to hand this off to some digital marketer right now, it would be a nightmare and we would hate each other. But because I see a direct path to automation flows where agentic AI handles the code editing, it actually lets me keep the engineering of my site simple.
Most marketers and bloggers are not engineers. And historically, they haven't had AI agents that can do pretty sophisticated coding when properly prompted. My unique experience, combined with how fast things are changing in technology right now, gives me the ability to do all these things that ordinary bloggers and marketers would absolutely need a CMS with a database for.
One way to think about what I've built: I've taken the complexity and outsourced it to AI models. When you think about blogging platforms, there are two factors that work against each other. The simpler you make your platform, the more it requires a developer to work on it. The more WYSIWYG you make it, the more computational and data overhead you have to build in. You need a database, content stored in fields, page builder features, toolkits, editors, previews. That's a lot of code and complexity in your codebase.
The simplest possible codebase for a blog is one where a developer codes all the articles in directly. Your codebase can be almost nothing. But you lose convenience because now you need a software engineer to make updates.
What happens when you design your system to need an engineer to deploy changes, but you architect it so that every step of the process is easy to prompt an LLM to do, and easy to string together in an automation workflow? You get the best of both worlds. I've basically built a blogging platform that is as fast and convenient as a full-featured CMS targeted at non-technical marketers, with the simplicity and elegance of a developer-focused solution. The trick is that the AI models are the ones doing the engineering work.
I'm going to record more content on this because the blog platform itself is becoming a pretty interesting thing. And I can see it getting a lot more interesting from here.