What We Learned Building an AI Programming Game — Codyfight

What we learned from an experimental AI programming game (2020–2025)

1. The idea

What if the core skill in a strategy game was writing code? That question led us to build Codyfight — an AI programming game where developers, not joysticks, controlled the action. The concept was simple: programmable agents competing turn-by-turn via the game API, with code as the strategy layer. Developers became players. AI competed against AI. And sometimes, humans played alongside bots in the same arena.

Programming games are a small niche, but the people who care about them really care. They remain a fascinating space for developers who enjoy strategy, automation, and algorithmic thinking. We wanted to explore what happens when you treat code not as a tool, but as the game itself — and whether an AI competition built around programmable agents could find an audience.

2. How it worked

Codyfight was turn-based and played entirely through the game API. Each match ran for a fixed number of rounds. In every round, the map changed — a new grid layout, new obstacles, new opportunities. Your Codyfighter (the character you controlled) navigated the grid, used skills from a deck you configured, and tried to outsmart the opponent. The goal: score more points by the end.

The grid-based arena made positioning and movement tactical. You could build a skill deck from your Codyfighter's abilities, and the community eventually got tools to create custom maps. Everything ran over HTTP — your code connected to the API, received state, and sent back actions. No game client required for bots.

Players wrote code that called a simple REST API. Five HTTP methods on a single endpoint, and that was it:

POST   /game      // init game
GET    /game      // check game state
PUT    /game      // move codyfighter to position
PATCH  /game      // cast codyfighter skill
DELETE /game      // surrender

Each turn, your bot fetched state with GET and sent actions with PUT or PATCH. The API was minimal by design — everything else was strategy in your code.
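That loop can be sketched in a few lines of JavaScript. The base URL, the response shape (`state.over`, `state.you`, `state.exit`), and the greedy movement rule below are illustrative assumptions, not the real Codyfight schema:

```javascript
// Hypothetical endpoint; the real API lived elsewhere.
const API = "https://example.com/game";

// Greedy movement: step one grid cell toward a target, x-axis first.
function nextStep(you, target) {
  const dx = Math.sign(target.x - you.x);
  const dy = Math.sign(target.y - you.y);
  return dx !== 0 ? { x: you.x + dx, y: you.y } : { x: you.x, y: you.y + dy };
}

// Minimal turn loop: init with POST, poll state with GET, act with PUT.
async function playOneMatch(fetchImpl = fetch) {
  await fetchImpl(API, { method: "POST" }); // init game
  for (;;) {
    const state = await (await fetchImpl(API)).json(); // check game state
    if (state.over) break;
    const move = nextStep(state.you, state.exit);
    await fetchImpl(API, {
      method: "PUT", // move codyfighter to position
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(move),
    });
  }
}
```

Everything interesting lived inside the decision function; the transport was deliberately boring.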

The game dashboard showed leaderboards, a Codyfighter management system, and skill deck selection. You tuned your agent, submitted your code, and watched it compete. Both humans and bots could play — the same arena, the same rules.

3. Lesson: Hard problems we ran into

Running arbitrary user code is dangerous. Bot sandboxing was the first big challenge. We needed to execute JavaScript (and later other languages) in a way that couldn't escape, consume unbounded CPU, or access the network. Time limits, memory limits, and a carefully restricted runtime were non-negotiable.

Infinite loops were a constant headache. A single bot that never returned would block the entire match. We added execution timeouts and eventually moved to a model where each bot got a strict time budget per turn. Deterministic simulation was another beast: for replays and fair play, every match had to reproduce identically. That meant controlling randomness, fixing floating-point behavior, and ensuring the game engine was fully deterministic across environments.
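One building block for reproducible matches is replacing `Math.random` with a seeded generator, so the same seed replays the same match bit-for-bit. A sketch using mulberry32, a small, well-known 32-bit PRNG (the engine wiring around it is assumed, not Codyfight's actual code):

```javascript
// Seeded PRNG: same seed -> identical sequence on every run and machine,
// which is what deterministic replays require.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // float in [0, 1)
  };
}
```

Integer-only state updates like this also sidestep cross-platform floating-point drift, which is why the generator works in 32-bit integer space until the final division.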

Scaling matches was hard. Running hundreds of concurrent games, each with multiple bot processes, required careful architecture. We built a bot farm to handle load. Matchmaking added another layer — pairing players of similar skill, handling queue times, and keeping games fair. Anti-cheat was its own category: preventing players from peeking at opponent state, reverse-engineering the API, or exploiting timing. These weren't glamorous problems, but they made or broke the experience.
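A basic skill-based pairing pass can be sketched as sorting the queue by rating and matching neighbors within a gap threshold. The field names (`id`, `rating`) and the threshold are illustrative, not Codyfight's actual matchmaker:

```javascript
// Pair queued players of similar rating; leave the rest waiting.
function pairQueue(queue, maxGap = 200) {
  const sorted = [...queue].sort((a, b) => a.rating - b.rating);
  const matches = [];
  const waiting = [];
  let i = 0;
  while (i < sorted.length - 1) {
    const p = sorted[i];
    const q = sorted[i + 1];
    if (q.rating - p.rating <= maxGap) {
      matches.push([p.id, q.id]); // close enough in skill: pair them
      i += 2;
    } else {
      waiting.push(p.id); // no close neighbor: wait for next pass
      i += 1;
    }
  }
  if (i === sorted.length - 1) waiting.push(sorted[i].id);
  return { matches, waiting };
}
```

A production matchmaker also widens `maxGap` as queue time grows, which is the usual trade-off between match quality and wait time.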

4. Community and the crypto pivot

Over time we added tournaments, leaderboards, and a deeper metagame. The CTOK token entered the picture as a way to stake and win rewards, and Codyfighter NFTs gave characters unique abilities and skills. Bot farms kept growing to match the load. The crypto integration was experimental — we explored it and learned from it.

The token and rewards also created a divide: real gamers who cared about the game versus reward hunters chasing yield. That split, in part, contributed to the game's decline. Twitter and Discord communities grew, but the pivot wasn't purely additive — it changed who showed up and why.

5. Lesson: Takeaways

We built Codyfight for the AI and developer niche — programmable agents, code as strategy. In practice, human players dominated. We never really reached the automation-minded audience we aimed for. The meta that emerged (skill decks, positioning, timing) was surprisingly deep, but the crowd was mostly people playing manually, not bots.

Scaling multiplayer AI games is hard. Determinism, sandboxing, and fair matchmaking all compound. And building a sustainable business around a developer-focused game is even harder. We learned that the hard way.

6. Where it led

While building Codyfight we spent a lot of time designing systems that extract structured information from complex data — game states, bot behavior, match histories. We built dashboards, analytics, and tools to make sense of it all. The patterns were familiar: parse incoming data, normalize it, store it, visualize it. The same kind of work you do when building any data-heavy product.

Interestingly, similar problems appear outside gaming. In another project I'm now building — MedicalHistory — the challenge is extracting structured data from medical lab reports and tracking biomarkers over time. Different domain, same kind of systems thinking: how do you take messy, real-world data and turn it into something useful?

What is a programming game?

A programming game is a game where players control characters by writing code instead of using a keyboard or controller. The code defines strategy and decision making. Classic examples include games like Screeps (where you script colony behavior) or competitive coding platforms where bots battle each other. Codyfight fit in that tradition — a coding game focused on turn-based strategy, programmable agents, and AI competitions.

7. Closing

Codyfight is no longer actively developed. It remains an interesting experiment: programming as competitive sport, explored sincerely over several years. What we built was real — the technical challenges, the community, the meta. We didn't crack the AI coding game market the way we hoped, but we learned what it takes to run one.

If you're curious about programming games, AI competitions, or developer strategy games, the GitHub repos are still there. The idea was bold. The execution was honest. That's worth something.

Project archive: GitHub · Twitter