Confident Coding: How to Write Code and Futureproof Your Career in the Age of AI

For 25 years, software engineering has been more than just a job for me; it's been my passion. My journey took an exciting turn three years ago with the advent of LLM-powered tools like Copilot, and there's simply no going back to coding without them. Initially, many dismissed these tools as mere novelties, useful only for simple tasks. For the past two years, however, I've been deeply immersed in building professional-grade code, using LLMs as far more than enhanced autocomplete: they are game-changers for complex, professional codebases. We're only beginning to scratch the surface of how statistical models can transform not just individual pieces of code but the entire software engineering lifecycle.

Imagine this: you’ve just wrapped up a software architecture brainstorming session. What if you could feed that transcript into Claude and have it instantly converted into GitHub tickets? Then, transform those tickets into YAML, write a script to push them to the GitHub API, and wrap it all in a web server? Extend it further with Google Drive integration and Slack notifications. With experience, you can automate this entire workflow in just an hour or two – turning spoken ideas into a fully automated system. Will there be minor AI hiccups? Perhaps. But I’d gladly spend ten minutes refining a few tickets, gaining back hours to focus on high-level architecture discussions.
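To make the GitHub step concrete, here is a minimal sketch of the kind of glue script I'd have Claude draft: it reads the LLM-generated tickets from a YAML file and creates one issue per entry through GitHub's REST API. The repository name, the YAML layout, and the GITHUB_TOKEN environment variable are assumptions for illustration; your version will differ.

```python
# Minimal sketch: push LLM-generated tickets to GitHub as issues.
# Assumes tickets.yaml looks roughly like:
#   - title: "Extract auth middleware"
#     body: "Details pulled from the brainstorming transcript..."
#     labels: ["refactor"]
# and that a personal access token is available as GITHUB_TOKEN.
import os

import requests
import yaml

REPO = "your-org/your-repo"  # hypothetical repository
API_URL = f"https://api.github.com/repos/{REPO}/issues"


def push_tickets(path: str = "tickets.yaml") -> None:
    with open(path) as f:
        tickets = yaml.safe_load(f)

    headers = {
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }
    for ticket in tickets:
        resp = requests.post(API_URL, headers=headers, json={
            "title": ticket["title"],
            "body": ticket.get("body", ""),
            "labels": ticket.get("labels", []),
        })
        resp.raise_for_status()
        print("created", resp.json()["html_url"])


if __name__ == "__main__":
    push_tickets()
```

The web server wrapper, the Google Drive integration, and the Slack notifications follow the same pattern: small pieces of glue code the LLM can draft in seconds and you can review in minutes.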

The real paradigm shift that many are missing is how LLMs amplify established software engineering best practices. To leverage LLMs effectively at scale, meticulous, consistently updated documentation, references, and tutorials become paramount, because they ensure that all the knowledge needed for a task fits within the LLM's context window. Consistent APIs that prevent errors become crucial: ideally, the API is so intuitive that the code practically writes itself. That is the goal. Robust linting with clear error messages is essential, because feeding those messages back to the LLM often resolves minor issues autonomously. Comprehensive unit tests and tooling, along with structured logging, become invaluable feedback loops for the LLM. These practices, vital for LLMs, are equally beneficial for human developers, because LLMs are trained on the very language humans use to communicate with machines.
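As a rough sketch of that lint feedback loop, here is what it can look like, assuming ruff as the linter and Anthropic's Python SDK for the model call; the model id and the prompt are placeholders, not a prescription.

```python
# Sketch of a lint feedback loop: run the linter, and if it complains,
# hand the diagnostics plus the source file back to the LLM for a fix.
import subprocess

import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment


def lint_and_fix(path: str) -> None:
    result = subprocess.run(["ruff", "check", path], capture_output=True, text=True)
    if result.returncode == 0:
        return  # nothing to fix

    with open(path) as f:
        source = f.read()

    prompt = (
        f"The linter reported these issues:\n{result.stdout}\n\n"
        f"Here is the full content of {path}:\n{source}\n\n"
        "Return the corrected file and nothing else."
    )
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model id
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.content[0].text)  # review before writing it back to disk
```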

My approach to coding with LLMs always prioritizes the human element. Creating documentation that is genuinely useful for humans is, without a doubt, the most effective way to generate documentation an LLM can work with. And if a task proves too intricate for the LLM, those same resources remain excellent material for the human developers who pick it up.

Consider a scenario with a messy legacy microservice burdened with convoluted JavaScript, incomplete CloudFormation scripts, a disorganized database schema, and inconsistent logging. Here’s how I’d tackle it with LLMs:

  • Start by asking Claude to generate an architectural overview and a Mermaid diagram from the CloudFormation, AWS CLI outputs, and AWS console screenshots.
  • Dedicate some time to refine the initial output.
  • Instruct Claude to transform the cleaned-up ARCHITECTURE.md into a Terraform module, making necessary refactors (LLMs excel at these ~1kLOC tasks).
  • Request Claude to create a tutorial on maintaining and deploying the Terraform setup.
  • Have Claude develop a CLI tool to monitor the application status as defined in ARCHITECTURE.md (a minimal sketch follows this list).
  • Set up monitoring with Sentry, Datadog, or Honeycomb for the application, again using ARCHITECTURE.md as the blueprint.
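As a rough illustration of that status CLI, here is a minimal sketch; the health endpoints are hypothetical, since in practice I'd have Claude derive them (and the script itself) from ARCHITECTURE.md.

```python
# Sketch of a status CLI: poll the service health endpoints that
# ARCHITECTURE.md describes and report which of them respond.
import argparse
import sys

import requests

# Hypothetical endpoints; the real list would come from ARCHITECTURE.md.
ENDPOINTS = {
    "api": "https://api.example.com/health",
    "worker": "https://worker.example.com/health",
}


def main() -> int:
    parser = argparse.ArgumentParser(description="Check service health")
    parser.add_argument("--timeout", type=float, default=5.0)
    args = parser.parse_args()

    failures = 0
    for name, url in ENDPOINTS.items():
        try:
            ok = requests.get(url, timeout=args.timeout).status_code == 200
        except requests.RequestException:
            ok = False
        print(f"{name:10s} {'OK' if ok else 'DOWN'}")
        failures += 0 if ok else 1
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(main())
```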

Alternatively, consider database refactoring:

  • Start by asking Claude to create an architectural overview and Mermaid diagram based on a “DESCRIBE TABLES” database dump (see the schema-dump sketch after this list).
  • Refine the initial diagram.
  • Ask Claude to suggest cleaner database views. Iterate and refine until satisfied.
  • Have Claude generate a DBT project to manage these views.
  • Request a tutorial on installing DBT and querying these views.
  • Develop a dashboard to visualize key metrics from DATABASE.md.
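For the first step, here is a sketch of the kind of schema dump I'd feed to Claude. To keep the example self-contained it uses SQLite's PRAGMA introspection; against MySQL you would issue DESCRIBE statements instead, and the database file name is of course hypothetical.

```python
# Sketch: dump a database schema into a markdown file small enough to fit
# comfortably in the LLM's context window.
import sqlite3


def dump_schema(db_path: str, out_path: str = "SCHEMA.md") -> None:
    conn = sqlite3.connect(db_path)
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]

    with open(out_path, "w") as out:
        for table in tables:
            out.write(f"## {table}\n\n")
            out.write("| column | type | not null | default |\n|---|---|---|---|\n")
            for _, name, ctype, notnull, default, _ in conn.execute(
                    f"PRAGMA table_info({table})"):
                out.write(f"| {name} | {ctype} | {notnull} | {default} |\n")
            out.write("\n")
    conn.close()


if __name__ == "__main__":
    dump_schema("legacy.db")  # hypothetical database file
```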

This methodology isn’t theoretical; it’s practical and effective today. It’s how I operate in both my open-source contributions and professional projects. My output has increased dramatically, perhaps by 20-30x. I can now accomplish in a single day what used to take weeks, and at a higher standard of quality—better documentation and tooling—than I could previously achieve due to time constraints. Importantly, this enhanced productivity comes without sacrificing work-life balance.

Mastering this approach requires practice, but it’s not unattainable. There’s no doubt in my mind that software engineering is undergoing a fundamental shift. The demand for software will not decrease; in fact, AI empowers more individuals and organizations to access high-quality software. However, the nature of my work—writing code—is evolving. Now, I code for leisure, to create tutorials on subjects I’m passionate about, often disabling Copilot to enjoy the process as a hobby.

For a deeper dive, check out the workshop I conducted earlier this year: https://github.com/go-go-golems/go-go-workshop

And for more insights, read my recent blog post expanding on these ideas: https://llms.scapegoat.dev/start-embracing-the-effectiveness…

Embrace these changes, hone your skills in AI-assisted coding, and futureproof your career in this exciting new era of software development. Confident coding in the age of AI is about leveraging these powerful tools to amplify your abilities and redefine what’s possible.
