How to prevent a robot uprising with types

I've been seeing more and more people drawing parallels between developers and AI coding agents. Especially in the context of developer experience, there's a common notion: what's good for developers is good for AI.
Why is that? Let me show you with a specific example: configuration.
A day in the life of a new developer
Vasya is a new developer who just graduated from university and joined a team building a cool new product: food delivery by robots. He's very excited to start working, and his team lead gave him a small onboarding task to get a better understanding of the project:
Update delivery robot configuration parameters
For your onboarding task, update the robot delivery configuration to improve neighborhood coverage and service reliability:
- Increase the search radius to 5km;
- Update the retry delay for our order service to 1 hour - this will reduce the number of orders being dropped or reassigned;
Ask your teammate for docs on the config; it's located in svc/skynet/configs/robot.json.
Pretty simple task, right? Let's see what Vasya will do.
Updating the configuration
In this company, people configure services via JSON; it's easy and straightforward to set up.
Vasya got directions from his team lead on where to find the file, but was told to ping a teammate for docs on which parameters to update. Later that day, he got the docs from the teammate and updated the config:
Before:
{
  "order_area": {
    "radius": 100000
  },
  "order_server": {
    "apiURL": "https://orders.robotdeli.example.com",
    "retries": 3
  },
  "retries_delay": 1800
}
After:
{
  "order_area": {
    "radius": 5000000
  },
  "order_server": {
    "apiURL": "https://orders.robotdeli.example.com",
    "retries_delay": 3600,
    "retries": 3
  }
}
Vasya sent this update in a PR to a teammate, who was in a hurry and just stamped the change without looking properly.
...
A couple of hours after the change was pushed to production, the team received a ticket from support:
All robots are moving to the same area in the city center!
Multiple people have reported on social media that our robots are moving toward downtown, we need to address this immediately.
Everyone was shocked: is this the feared robot uprising? The team scrambled to look through the changes and found the config update: it turned out the search radius had been increased to 50 kilometers, which covers the whole city, and the majority of orders are downtown!
They pushed an emergency update to the robots to reduce the radius, but nothing changed. The robots still didn't request new orders from the service!
Developers looked at the config one more time and saw that retries_delay had been moved. Checking the code revealed that the default value is 0, which meant no retry until the next order.
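This failure mode is easy to reproduce. Here's a minimal sketch (the names and structure are illustrative, not the actual service code) of how reading an untyped JSON config silently falls back to a default when a key moves:

```typescript
// Hypothetical sketch of the bug: reading an untyped JSON config and
// silently falling back to 0 when a top-level key is missing.
interface RawConfig {
  retries_delay?: number;
}

function getRetryDelay(raw: RawConfig): number {
  // If retries_delay was moved (or misspelled), this quietly returns 0
  // instead of failing loudly.
  return raw.retries_delay ?? 0;
}

// The value was nested under order_server, so the top-level lookup misses it.
const misplaced = JSON.parse(
  '{"order_server": {"retries_delay": 3600}}'
) as RawConfig;

console.log(getRetryDelay(misplaced)); // 0 — the nested value is ignored
```

Nothing crashes and nothing is logged; the config simply behaves as if the delay were never set.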
After the last bug was finally fixed, the robots started returning to their designated neighborhoods, but some of them had to be picked up because they were too far from a charging station, which cost the company some extra money.
What about Vasya? He didn't get any punishment, apart from staying late to fix the issue, although, by company tradition, his teammates took a selfie with him as the guy who caused the biggest production outage (so far).
What went wrong
Remember, Vasya requested the documentation for the config before making the change.
Turns out the documentation wasn't in sync with the code: the retries_delay field had been moved outside of order_server, but the doc still showed it inside, which seemed more straightforward to Vasya.
As for the radius: for historical reasons the team set it in centimeters, and it's easy to miss a zero that way.
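One way to make the unit explicit (this is my own TypeScript illustration, not the team's code) is to brand the number with its unit and construct it only through conversion helpers:

```typescript
// Illustrative sketch: a branded type makes the unit part of the value,
// so nobody counts zeros by hand. Not from the real codebase.
type Centimeters = number & { readonly __unit: "cm" };

const fromMeters = (m: number): Centimeters => (m * 100) as Centimeters;
const fromKilometers = (km: number): Centimeters => fromMeters(km * 1000);

// The conversion does the zero-counting for you.
const radius = fromKilometers(5); // 500000 cm, i.e. 5 km
console.log(radius);
```

With plain JSON there is nowhere to attach this knowledge; the unit lives only in a doc that can drift out of date.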
This kind of documentation dance happens all the time with configuration, and it's the perfect segue to AI.
AI perspective
If this change had been made through an LLM, it would've ended up with the same mistake in retries_delay, and for some models even in the radius. The model would likely have been trained on an old version of the documentation, especially if it's an external library, or in a large codebase where it's hard for the model to find the exact place for the configuration.
How can this be addressed? Enter typed configuration:
Let's see how the same configuration looks with Typeconf:
// Type definitions
model OrderRadius {
  value_in_cm: int32;
}
model RobotConfig {
  order_area: {
    radius: OrderRadius;
  };
  order_server: {
    endpoint: string;
    retries: int32;
  };
  retries_delay: duration;
}

// Values
const config: RobotConfig = {
  order_area: {
    radius: OrderRadiusHelper.createFromMeters(5000),
  },
  order_server: {
    endpoint: "https://orders.robotdeli.example.com",
    retries: 3,
  },
  retries_delay: "1h", // the 1-hour retry delay from the task
};
export default config;
With types, it's clear to both LLMs and humans what needs to be updated:
- It's easy to validate the params - the source of truth is always close and easy for LLMs to retrieve;
- The TypeScript compiler will catch mistakes, and the LLM will learn from the feedback loop automatically;
- You as a developer can check the types and get the fields right from IDE autocomplete;
- Validation can also be added to CI;
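To make the feedback loop concrete, here's a plain-TypeScript sketch (the interface is mine, not actual Typeconf output) where misplacing retries_delay is a compile-time error rather than a silent production incident:

```typescript
// Illustrative sketch: with a typed config object, putting retries_delay
// inside order_server no longer type-checks.
interface RobotConfig {
  order_area: { radius_cm: number };
  order_server: { endpoint: string; retries: number };
  retries_delay: number; // seconds
}

const config: RobotConfig = {
  order_area: { radius_cm: 500000 }, // 5 km
  order_server: {
    endpoint: "https://orders.robotdeli.example.com",
    retries: 3,
    // retries_delay: 3600, // ← uncommenting this fails to compile:
    // "Object literal may only specify known properties"
  },
  retries_delay: 3600, // 1 hour, in the right place
};

console.log(config.retries_delay); // 3600
```

Vasya's mistake would have been caught before the PR was even opened, and an LLM making the same edit would get the compiler error back immediately.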
A growing number of services are adopting this kind of configuration, and Typeconf can be a good framework for adding it to your product.
Check it out on our GitHub: https://github.com/typeconf/typeconf, it's fully open-source.
Next week we'll release our MCP, sign up for our newsletter so you don't miss it!