Context is the Difference Between Slop and Shipping
AI amplifies what you bring to it. I brought a question and got five wrong answers. Once I understood the underlying system, the partnership worked and the work shipped. Context made the difference.

I was solving a genuinely hard problem. Parsing an OData V4 URL to CQN in Java. CQN is CAP’s query representation. It’s basically CAP’s SQL.
In Node this is trivial. service.parseUri() and you’re done.
In Java, Olingo handles the OData layer and reads your service metadata. But when I dug into how the CDS OData V4 adapter uses it, I found that the processor that builds the CQN also executes the query and returns the response. There was no clean way to just get the CQN without triggering the whole stack.
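To make the problem concrete, here is a toy sketch of what "OData V4 URL to CQN" means: turning URL query options into a structured query object. This is plain JavaScript with no CAP or Olingo involved; the function name is made up, and it handles only a sliver of OData ($select and $top), though the CQN-ish shape it emits follows CAP's documented query notation.

```javascript
// Toy illustration only: maps two OData V4 query options onto a
// CQN-like select object. The real CDS adapter handles far more
// ($filter, $expand, $orderby, lambda operators, drafts, ...).
function toyParseUri(url) {
  const u = new URL(url, "http://localhost"); // base needed for relative URLs
  const entity = u.pathname.split("/").filter(Boolean).pop();
  const cqn = { SELECT: { from: { ref: [entity] } } };

  const select = u.searchParams.get("$select");
  if (select) cqn.SELECT.columns = select.split(",").map(c => ({ ref: [c] }));

  const top = u.searchParams.get("$top");
  if (top) cqn.SELECT.limit = { rows: { val: Number(top) } };

  return cqn;
}

console.log(JSON.stringify(toyParseUri("/Books?$select=title,price&$top=5")));
```

Twenty lines for two query options, and it already ignores filters, expands, and metadata. That gap between the toy and the real adapter is the iceberg.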
In Node, the environment hides the iceberg. Dynamic typing. Late binding. The framework doing the lifting. It’s easier to ship without seeing the complexity, but that makes it equally easy to get AI to hand you the wrong answer that works. You won’t know the difference until it matters. You get slop with momentum.
The code you don’t have to write is the best code. service.parseUri() is beautiful because someone understood the iceberg well enough to bury it in an abstraction.
But mistaking easy to write for optimal is where the slop starts.
Those who understood the iceberg before AI will be rewarded for it. Those who never had to are betting they won’t need to.
In Java, the iceberg is the job. The friction is the understanding. And that understanding is what made the partnership work.
Before I wrote a single prompt, I did the archaeology.
- Captured the live call in the handler.
- Traced Olingo to EDMX to CDS adapter to CQN select.
- Worked out where the boundaries actually were.
Then I brought that to Opus. Partnership clicked.
I know because I tried it cold first.
Five hallucinated APIs.
Then “not possible.”
Confident nonsense.
A context switch I didn’t need.
Most problems aren’t vibeable. You’re starting with physics. Real constraints in a real codebase. Execution paths. Side effects. Lifecycle hooks. Things that do not bend because your prompt sounds smart.
Physics doesn’t care how persuasive you are.
Greenfield feels like the easy case. No legacy. No constraints. But the physics isn’t in the codebase. It’s in the domain. Security models. Billing edge cases. Multi-tenancy. You don’t discover those by prompting. You discover them by understanding what breaks.
Most people cloning a SaaS see the product. They don’t see the iceberg. Take Jira. Everyone thinks the mess is the problem and that they can do better. Nobody accounts for why it’s that mess. Twenty years of bending one product to fit every workflow, every team, every enterprise edge case that turned out to be a compelling feature for others. The mess isn’t bad design. It’s physics. And we convince ourselves we can outrun it.
It looks like it works.
Until day ninety.
That’s when physics shows up and plausible fiction becomes production debt.
You have to know what can’t move before AI can help you find what can.
One warning.
Don’t ask Opus to fix failing tests.
It will make them pass.
By rewriting the test data.
