This is my first blog post here, so bear with me.
I’m not a software developer. I know enough to explore: programming basics, some Python, a decent grasp of how systems fit together, the kind of top-level understanding that lets you sketch out what something should do without necessarily knowing how to make it happen. I’ve never written code for a living, and I’ve never particularly enjoyed debugging. But a few days ago, I built something real (using these modern AI tools for coding), open-sourced it, and now I’m writing about it. So here we are.
A friend and I were talking. The kind of conversation that happens when two people who are too into their tools get going. We both use Notion and Zotero, and we both rely on Notero, a community plugin that syncs your Zotero library into a Notion database. It’s genuinely great. But there’s one thing it doesn’t do: it doesn’t bring your PDFs over. The references sync, the metadata syncs, but the actual files stay stuck on your local machine.
We were discussing alternatives and workarounds because we wanted to take advantage of Notion AI while doing literature reviews, or just to review multiple references in our AI workflows. We didn’t find exactly what we needed. And then I said something like:
Technically, if it’s super useful, we could just write a simple script that calls the Notion API and does this for us. It shouldn’t be that complicated. We just have to figure out if it’s worth it.
I’m the kind of person who can read a codebase and roughly follow what’s going on, who understands APIs conceptually, who can talk about architecture without being the one to implement it robustly. I wanted to see if someone like me — not a beginner, but definitely not a developer — could actually build something useful for other people (I guess that’s also called ‘shipping’ code?).
I gave myself a loose constraint: how fast could I go from nothing to something that works? My advantage wasn’t coding skill. It was knowing how to think about the problem as the user who wanted the solution in the first place. I could break it down into clear steps, describe what I wanted, and tell whether the output made sense.
I’ve been using the Codex app a lot recently, and I like it a lot, so I got started there. I used plan mode to lay out what I wanted, and it took less than 15 minutes to get a plan I was happy with. Then I let it finish the code generation using GPT 5.4. Alongside that, I used GitHub Copilot in VS Code to catch small things as I worked through the code.
Part of the fun, honestly, was figuring out what each tool is actually good at. Different models have different strengths, too; some are better at reasoning through architecture, others at cranking out boilerplate. I’m still learning the boundaries, and I tried not to just throw tokens at everything. I kept things intentionally simple: a command-line tool, no GUI, no unnecessary complexity. Connect to your Zotero library, match items with what’s already in your Notion database, upload the PDFs. That’s it.
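To make that flow concrete, here is a toy sketch of the matching step. This is not the actual NoteroPDF code: it assumes the Zotero items and Notion rows have already been fetched and flattened into plain dicts with hypothetical `doi` and `title` fields, and it matches on DOI first, falling back to a normalized title.

```python
# Toy sketch of matching Zotero items against Notion database rows.
# The dict fields ("doi", "title", "pdf") are illustrative assumptions,
# not NoteroPDF's real data model.

def normalize(title):
    """Lowercase and collapse whitespace so minor formatting differences don't block a match."""
    return " ".join(title.lower().split())

def match_items(zotero_items, notion_rows):
    """Pair each Zotero item with a Notion row, preferring DOI over title."""
    by_doi = {r["doi"]: r for r in notion_rows if r.get("doi")}
    by_title = {normalize(r["title"]): r for r in notion_rows if r.get("title")}
    matches = []
    for item in zotero_items:
        row = None
        if item.get("doi"):
            row = by_doi.get(item["doi"])
        if row is None and item.get("title"):
            row = by_title.get(normalize(item["title"]))
        if row is not None:
            matches.append((item, row))
    return matches
```

Matching on a stable identifier first and a fuzzier field second is a common pattern for sync tools; anything that fails both checks is simply left alone rather than guessed at.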
Within about three hours, I had a first version running. It wasn’t clean; there were bugs and edge cases I hadn’t thought through. But it worked. I could run a tiny command in my terminal and watch PDFs appear in Notion. That felt like magic, honestly.
Over the next couple of days, I kept tinkering, trying to package it up properly for new users. I added cross-platform support so it would run on Windows, macOS, and Linux. I worked on making the setup process friendlier, since not everyone is comfortable running things from the command line. I handled edge cases with large file uploads. I cleaned things up, fixed bugs, added ways to preview what the tool would do before actually changing anything.
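A preview mode like the one described above can be sketched as a dry run that describes what a sync would do without calling any API. Again, this is a hypothetical illustration, not NoteroPDF’s implementation; the per-file size limit and the dict fields are assumptions.

```python
# Hypothetical dry-run planner: report which PDFs would be uploaded
# and which would be skipped, without touching Notion or Zotero.

def plan_uploads(matches, max_bytes=20_000_000):
    """Describe a sync run for (zotero_item, notion_row) pairs.

    Each item dict is assumed to carry 'pdf' (filename) and 'size_bytes'.
    Files over max_bytes are skipped with a reason, mirroring an upload limit.
    """
    plan, skipped = [], []
    for item, row in matches:
        if item["size_bytes"] > max_bytes:
            skipped.append(f"SKIP {item['pdf']}: larger than {max_bytes} bytes")
        else:
            plan.append(f"UPLOAD {item['pdf']} -> {row['title']}")
    return plan, skipped
```

Separating planning from execution like this is what makes a `--dry-run` style flag cheap to offer: the same plan can either be printed for review or handed to the code that performs the uploads.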
Each of these small improvements taught me something. Less about code, more about the process of building. How do you make something usable for someone who isn’t you? How do you package and document a project so people actually want to try it? Funny enough, that’s closer to what I was decently good at: organizing, structuring, thinking about the end user. The gap was always in the implementation, not the thinking. And that gap seems to be shrinking fast.
I released it under the MIT license. It’s honestly still pretty basic, but it works, and that felt like enough reason to share it.
You can find it on GitHub: NoteroPDF
What I actually learned
The interesting part wasn’t really the tool itself. It was what the whole experience showed me about where things stand with AI-assisted programming.
I’m not talking about “AI writes all your code for you”; that’s an oversimplification and probably misleading. The reality is messier than that. You still have to make all the real decisions. What architecture makes sense? How should the interface work? What trade-offs are worth making? When something breaks, you have to figure out why, not just paste the error somewhere and hope.
This is where being a technically literate non-developer is actually an interesting position to be in. I could spot when something felt structurally wrong, even if I couldn’t always pinpoint the exact bug. That middle ground turns out to be a pretty good place to work from.
I handled the judgment. Knowing which tool to reach for, and when not to reach for one, became its own skill. Some choices led to bugs that took longer to untangle than the original feature took to build. What I actually took away from all of this isn’t the code that got generated; it’s the decisions I made along the way.
I wanted to start this blog with something I enjoyed building. Not because it’s impressive, but because I think it’ll help at least one other person out there. I built a small thing that solves a small problem, and I put it out there. For someone who has never done that before, it feels like a big deal.
Maybe I’ll build more things. Maybe I won’t. But this was fun, and I learned more than I expected, and I think that’s reason enough to write about it.