I have argued before that the real gain from AI in software engineering is not only in code production. GenAI is definitely a useful tool for coding, but coding is not where the bottlenecks are. To be effective, not merely efficient, at coding, the design of the software being written is crucial. I think it’s pretty much a consensus by now that to be really productive with coding agents, you need to direct them carefully, and of course provide the proper context.
Proper context is more than just good requirements/specifications. Those might be good enough for greenfield projects where we’re starting from scratch. The reality for many existing companies and projects, however, is that our starting point is much muddier than we’d like, and simply connecting an AI to it isn’t enough. A system with hundreds of separate services communicating to implement different business flows and user interfaces is hard to follow, whether you’re the human who built it or a supercharged AI agent that understands code perfectly. Adopting AI effectively in such circumstances means more than just letting the AI tool (e.g. Cursor, Claude Code) read and index the code. That’s an important prerequisite, but it’s not enough.
Any design methodology would at the very least require us to have knowledge of the existing system and processes it implements. Otherwise we’ll be “stuck” with generic advice which often becomes useless pretty quickly1. When dealing with a complicated system we have to let the AI investigate on its own if we want it to help us with the design. This complicated internal knowledge, often domain-specific, has to be made available to the LLMs2 if we have any hope of the AI helping with the design.
Note that this isn’t an AI-only problem. I’ve often encountered the situation where there’s, at best, a single engineer who remembers why a certain flow is implemented in a certain way, or why there are two separate endpoints that implement pretty much the same logic. It’s a human problem as well; as humans we just compensate for it by relying on tribal knowledge: old emails and Slack threads. This might be an option for an AI in some cases too, but at best it is very inefficient.
On top of this, a lot of times, the reality of modern business software is that of a distributed architecture, with hundreds of services and legacy code coexisting with more recent rewrites. Cross-service flows can become very intricate, and they are often undocumented. Even when the knowledge exists (in someone’s head) it’s hard to puzzle things together, and practically impossible for an AI agent to understand it without proper architecture context. Humans can eventually trace flows, but they rarely document them. AI agents can probably do something close to that, but it’s very inefficient, both in running time and token cost.
If we want AI to design features, troubleshoot issues or help us assess the impact of changes, we have to help it understand how the system fits together. The need existed well before AI took the stage, but LLM-based tooling both highlights the gap and offers a path to close it. Humans are traditionally bad at maintaining documentation reliably. But given the right tools and direction, AI can also help create and maintain the relevant documentation.
This is what led me to the Application Architecture Hub.
The Goal
The primary goal is pretty straightforward. Build a knowledge base that AI agents can query to understand system architecture. When an agent needs to design a feature, it should have context about existing patterns and dependencies. When an agent traces a bug, it should know which services participate in the flow. When an agent assesses the impact of a change, it should understand what depends on what.
We already know LLMs can read code and write documentation. Not only that, they do it repeatedly, consistently and tirelessly.
If we design the extraction and documentation process well, we can have agents that produce documentation that is actually useful. Not just generated API docs with lists of endpoints3, but actual structured documentation, semantically summarizing the code, with citations back to the actual source code.
In this sense, AI works much better. A human who goes through source code listings can spend hours building a mental model of the relationships between services4. An agent can produce a structured summary in minutes. Given the right extraction prompts, it can produce meaningful descriptions in a consistent format. And this of course scales across hundreds of repos. Contrast this with humans documenting different repos, each bringing their own style, preferences and assumptions about what matters. The resulting inconsistency makes it very hard to reason and correlate across services.
LLMs also make incremental updates easier: they can compare (“diff”) the current state, identify what has changed, and touch only the necessary sections. AI agents don’t get bored or decide that updating documentation is not a priority and can be pushed to a later sprint5. Humans rarely sustain this over time. They might invest initially, but entropy will win.
So my goal here is: have a living knowledge base where AI is used both to maintain it and consume it – AI agents are the prime consumers. Agents can query the hub to understand the system, as well as extract information and keep it up-to-date.
It turns out that humans (unsurprisingly) need this as well. As I noted above, the introduction of LLMs to coding and design did not invent the problem of understanding the system. And given up-to-date structured documentation, with AI helping to query it, humans find it useful too.
AI-generated documentation isn’t a groundbreaking concept. What matters here is that it be relevant and of high quality for the relevant use cases. The thought is that AI-based documentation, with proper engineering of the extraction process and relevant tooling, can outpace human-maintained documentation. This is not because AI is smarter, but because it is smart enough, and tireless.
Designing the Architecture Hub
Even though it turns out the architecture hub is useful for humans, the driving force behind the design was consumption by LLMs and tools driven by LLMs. Even when humans use it, they do so through LLM-based tools.
Initially, I started researching and thinking about achieving scale – graph databases, maintaining large collections of documents, specifying potentially complex ontologies of objects.
I can’t rule out the usefulness of these techniques just yet, but I quickly came to realize that I was prematurely optimizing6.
So I quickly pivoted to starting with a much simpler approach. The architecture hub is, for now, a simple Git repository. It’s not a code repository with implemented business flows and tests. There are no deployable artifacts. Instead it maintains a series of markdown files organized consistently into several directories.
This in itself already allows for simple consumption – AI agents can easily read markdown files. It’s also easily reviewable and usable by humans. Combined with a GitHub MCP server, or simply cloning the repo locally, any AI agent can easily access the information.
The “unit of ingestion” is a single code repository. A repository usually already encapsulates a specific piece of logic, and is easy to follow and build tooling around.
Architecture Facets
We could have a single file per repository, describing each repo in detail. But this easily gets too large and unfocused. Different tasks (by agents or humans) require different types of information. For example, tracing a bug requires understanding events and call flows; assessing impact of changes requires understanding dependencies. Having a single giant file would mean that an agent would have to load everything and burn tokens on information it doesn’t need. It could easily pollute the context. Instead, I decided to structure the hub around different facets of the architecture.
The application architecture hub is structured around simple file system directories containing the files. Each directory represents a specific perspective (a facet) of the architecture (APIs, domain models, events produced/consumed, etc.). A directory contains one markdown file per ingested code repository, and all files follow a consistent template with consistent metadata. This is a predictable structure that is also easy to describe.
| Facet | What It Documents | Questions It Answers |
| --- | --- | --- |
| Domain | Data entities, relationships, types | What data does this service manage? How is the data structured? |
| API | Endpoints, request/response contracts | How do I call this service? What functionality does it offer, if any? |
| Events | Message topics, payloads, producers, consumers | What does this service emit or consume asynchronously? |
| Frontend | Frontend applications: state management, components, routing | How does the UI work? |
| External Dependencies | Databases, brokers, external services | What components and external services does this service depend on? |
| Dataflow | Inputs, transforms, outputs, sensitive data | How does data move through this service? |
The list of facets is stable and aims to document interesting aspects that often come up during design, and allow us to ask more complicated questions. It can of course be extended to include more aspects.
The design is therefore simple: one file per repository (usually named after the repository name), per relevant facet7. If you need to understand the HTTP API exposed by the payments service (from a repo called “payments”), you simply look for `api/payments.md`. If you need to see which events this same service emits, you can look in `events/payments.md`. This is a simple to follow structure, both for AI and humans.
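For illustration, the resulting layout might look something like this (repository names and the exact set of directories are examples, not the actual hub contents):

```
api/
  payments.md
  reservations.md
domain/
  payments.md
  reservations.md
events/
  payments.md
frontend/
  booking-web.md
```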
Dividing the information into different files has other benefits beyond simple context window efficiency:
- Easier to search (e.g. using grep) for specific facet information across repos. Remember that our prime motivation is system wide patterns (cross-repo).
- Parallelism: it’s easier to divide work across sub-agents when they can ingest and search on separate file directories.
- Incremental updates: updating a changed API usually does not require updating the domain model information, or external dependencies.
Note that searching the files does not exclude searching the code as well. In fact, the extraction takes care to maintain explicit code references. And when querying the hub I often find myself asking the agent to start from the architecture hub, but also use the git tools (either MCP or github CLI) to look into the specific code, based on the citations.
The use of a simple Git repo brings the other immediate advantages of textual content – it’s versioned and easily reviewable. It’s easy to see what gets updated and when.
The flow, at a high level: each code repository is ingested into per-facet markdown files, which agents (and humans) can then query.
Ingestion Pipelines
How does ingestion – creating or updating documentation – work?
As noted above, the main unit of ingestion is a code repository. Each code repository is ingested in turn, and the created artifacts reflect the original code repository. This allows us to debug, retry and review specific repos, and tie the ingestion into already existing CI processes. We don’t need to invent new relationships or mappings of code repositories to artifacts. It’s also easier to query specific code files using the hub as the guiding index when necessary.
Technically, we implement the extraction process as a series of agent skills: structured prompts with accompanying templates and scripts. These guide the extracting agent what to look for, how to search the codebase and the format of documentation file to produce.
Why skills?
Besides being text-based and therefore easily version controlled, skills let us leverage the LLM’s built-in ability to understand the code and its semantics. With a good enough LLM, an agent following a skill can produce consistent results. We do use scripts for basic understanding of the hub (e.g. the repos already ingested), and we could probably optimize with scripts that parse the code deterministically (similar to static code analysis), but we’re starting simple, with an implementation that doesn’t require any extra runtime beyond the running agent(s).
Each facet has two skills – one for extracting the facet from scratch, and one for updating the documentation. The update skill compares the change in the code against the current documentation state and only updates what’s changed. Full re-extraction is possible, but seems too expensive.
The skills define what to look for, depending on the facet they’re documenting. For example, the API skills look for HTTP controllers and decorators (we’re mostly NestJS-based); the event skills look for message schemas; the dependency skills look for definitions of connection strings, external endpoints, etc. All skills have a template they follow, so outputs are uniform in structure. All templates include a metadata section (repository URL, date of ingestion, git commit SHA of the repo at the time of extraction).
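As a sketch, a facet file’s metadata header might look like the fragment below. The field names, repository URL and layout are my assumptions; the post only specifies that the metadata covers the repository URL, ingestion date and commit SHA:

```markdown
# payments – API facet

## Metadata
- Repository: https://github.com/example-org/payments   <!-- hypothetical URL -->
- Ingested: 2026-01-15
- Commit: 3f9c2ab

## Endpoints
...

## Change Log
- 2026-01-15: initial extraction
```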
The ingestion pipelines themselves exist in two versions: remote and local. The difference is in how they use the data.
The remote version accesses the ingested code repo using the GitHub MCP server. It does not require a local clone, and can effectively work from anywhere with the proper credentials set up.
The local version uses git CLI to clone the ingested code repo locally to a temporary directory and then reads the code locally using file system tools. The local version is generally cheaper and more reliable. It does require more disk space.
In addition to producing the documentation files, the ingestion agents also update an existing llms.txt file, which serves as the hub’s index. This is a plain text file, listing all the different documented repos, and explaining the structure of the architecture hub.
The querying skills guide the agent to first look at this file, understand the hub’s structure and start the lookup from this point. Since the repository structure is simple, the llms.txt file structure is simple – one line per document created, with a simple one line description of the content, divided by the facets.
This makes locating documentation across different axes simple enough to use with a simple grep. For example, looking for all domain documentation is a simple search for `domain/*.md` in the file, and getting a list of results. Similarly, looking for all information about the reservation service is simply grepping8 for `*/reservations.md`.
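To make this concrete, here is a runnable sketch against a toy index file – the entry format and repository names are my assumptions about the layout, not the actual llms.txt:

```shell
# Build a toy llms.txt (illustrative entries: "path - description" per line)
cat > llms.txt <<'EOF'
domain/payments.md - entities and relationships of the payments service
domain/reservations.md - reservation domain model
api/payments.md - HTTP endpoints exposed by the payments service
events/reservations.md - events produced and consumed by reservations
EOF

grep '^domain/' llms.txt            # all domain documentation
grep '/reservations\.md' llms.txt   # everything about the reservation service
```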
Ingestion can be triggered manually by any user (a GitHub Action invoked from the GitHub UI, or a script). It can also be invoked by a non-blocking CI step triggered on every merge to master/main – we want to update our documentation, but only with the changes that make it to the main branch.
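A non-blocking CI trigger along these lines might look like the workflow sketch below. The workflow name, the opt-in variable and the script path are all hypothetical assumptions, not the actual setup:

```yaml
# Hypothetical sketch – names and paths are assumptions.
name: architecture-hub-ingestion
on:
  push:
    branches: [main, master]
  workflow_dispatch: {}                  # manual trigger from the GitHub UI
jobs:
  ingest:
    if: vars.ARCH_HUB_INGEST == 'true'   # opt-in per repository
    runs-on: ubuntu-latest
    continue-on-error: true              # non-blocking: never fails the merge
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/ingest.sh         # hypothetical entry point for the ingestion agent
```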
The whole process is orchestrated by a single orchestrator agent (implemented as a skill as well), which launches sub-agents – one per facet.
The orchestrator takes care to clone the repository if needed, and then invokes the separate subagents to either create or update the documentation for each facet independently.
The motivation for launching sub-agents comes from two main drivers: resiliency and latency. Since the work of each subagent is independent, they do not interfere with each other – all of them just read the code and write independent files. They are invoked in parallel, so the overall process terminates earlier, and a failure in one subagent does not cascade to the others. Technically it also means that the skills for each facet are separate, and therefore simpler – less room for LLM mistakes. A single facet failure is also easier to troubleshoot and re-run if necessary.
Note that it is the orchestrator agent that updates the index (llms.txt) file. Technically, each subagent could update the index file on its own upon completion, but since this is a shared resource, we would run into overlapping write conflicts. Since this is file system-based work, it’s easier to instruct the agents to return the result of their work as their output, and have the orchestrator update the index file. Updates to the shared resource then happen in one place – the orchestrator – and we avoid conflicts.
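The fan-out/fan-in pattern can be sketched in shell terms. Here `extract_facet` is a stand-in stub for launching a real facet sub-agent, and the file names are invented for illustration:

```shell
# Sketch of the orchestration pattern (not the real orchestrator skill).
extract_facet() {
  # placeholder stub: the real version runs the facet's extraction skill
  echo "$1 facet extracted for $2"
}

repo="payments"
mkdir -p out
for facet in domain api events frontend dependencies dataflow; do
  extract_facet "$facet" "$repo" > "out/$facet.summary" &   # sub-agents run in parallel
done
wait  # sub-agents are independent; a failure in one does not cascade

# Single writer: only the orchestrator touches the shared index.
cat out/*.summary >> llms.txt
```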
The ingestion itself can be triggered manually or as part of an automated process, e.g. after a successful merge and build of the master branch. In either case, the ingestion stops at creating a PR that can be reviewed by a human. Human review is still important, both to account for inaccuracies (which hopefully will be reduced over time) and so people learn to trust the information. Errors are still possible at this stage; without review they will accumulate, and trust will erode. It’s important to build this level of trust in the process.
Querying the Hub
Once we have the documentation in place, we can start querying it.
Generally, the querying process is simply prompting an agent to read the documentation and construct a report.
Identifying the relevant facets and extracting the necessary information, including correlations across different documentation files, is where we let the LLM apply its reasoning. We just take care to have a consistent structure, with enough information.
We have several “query” skills which instruct the agent to look in the index file, and some other technical layout information. They also instruct the agent to cite its sources. This helps to both reduce hallucinations as well as provide the result consumer (human or AI) with pointers to source material. The actual querying and output really depends on the use case and the query issuer.
The query itself can be by a human user invoking some AI agent with a user interface (e.g. Cursor, Claude Code or some chat interface with access to the file system). And of course, it can be some other agent-driven process which is simply given access to the files. I have used the architecture hub as a context directory for a dialectic-agentic design debate – it works9.
There is no specific query language – we let the LLM interpret the query and work its way through the documentation. We can of course provide hints (“look at the ‘reservation’ service”), but this is not mandatory.
Examples for ad-hoc queries:
- “Which services consume the financial-related events from the ‘financials’ service?”
- “What overlap do we have in domain models between the payments service and the reservation service? And why?”
- “Who is calling the accounting service?”
Technically, the query skill comes in three variations:
- Remote: querying the hub using the GitHub MCP server
- Local: querying local file system, assuming the hub is locally available, and up-to-date.
- Auto-Local: similar to local, but first clones/pulls the architecture hub’s repo to a temporary local directory, to make sure the information is up-to-date.
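A minimal sketch of that clone-or-pull step, assuming plain git (the function name and paths are mine, not part of the actual skill):

```shell
# Make sure a fresh copy of the hub exists at the given directory.
sync_hub() {
  url="$1"; dir="$2"
  if [ -d "$dir/.git" ]; then
    git -C "$dir" pull --ff-only --quiet   # already cloned: just update
  else
    git clone --quiet "$url" "$dir"        # first use: clone
  fi
}
# e.g. sync_hub "git@github.com:example-org/architecture-hub.git" /tmp/arch-hub
```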
Note that we can also instruct the agent to continue looking into the actual source code if our requested analysis needs it. Having the GitHub MCP available (or the code cloned locally) makes further investigation into source code only a tool call away for the agent. The documentation in the hub does not replace code indexing; it’s more about bridging between (technically) disconnected repositories and mapping/deriving semantic relationships where they exist. There is little value in trying to replicate the code indexing and understanding already performed by current coding agents and tools.
It’s interesting to see that even when humans query the hub, it’s done using AI agents. In fact, both the producer and the consumer of the hub are AI, even when directly instructed by a human user. It’s LLMs that produce the documentation, and LLMs that consume it. This also opens the possibility for an ingesting agent to verify itself simply by querying the hub for the changes it just introduced. By itself, this might not sound that interesting, but the scale makes it more so. Maintaining technical documentation, with appropriate quality, now becomes a purely mechanical process that can scale more easily.
Structured Reports
Beyond ad-hoc queries, the hub supports reusable report templates. A report template is simply a prompt file, meant to be used with the query skill, that guides the agent through a more complicated analysis workflow. It specifies what to read, what to search for and how to format the output.
Using a report is simply prompting an agent with something like this:
Using the local query skill, follow the report instructions in ./reports/dependencies.md for the reservation service as the root service.
Output your result to ~/tmp/dependencies_reservations.md.
The agent then digs into the documentation, maps out services and their dependencies, and produces a complete report with relevant pointers to source code.
An investigation that could take hours or sometimes days is done in minutes10.
We currently have several such predefined reports, each useful in different cases.
Dependency map
Given a specific service, map out all other services making API calls to it, and what other services it calls. It also maps out events produced and consumed by the services, as well as services sharing the DB11.
Useful when trying to estimate the blast radius of a given change.
Cross service flow analysis
A flow analysis traces a business process end-to-end across multiple services. The agent follows API calls, events, and data writes across service boundaries. The output is a sequence diagram plus a step-by-step breakdown with source citations.
“Trace the order cancellation flow” produces a sequence diagram showing the user request hitting the order service, the order service publishing a cancellation event, the payment service processing a refund, the notification service sending confirmation. Each step cites the documentation that describes it (which in turn cites the source code).
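For illustration only – service and event names below are invented to match the example above, not real output – such a sequence diagram might look like:

```mermaid
sequenceDiagram
    participant User
    participant Orders as Order Service
    participant Payments as Payment Service
    participant Notifications as Notification Service
    User->>Orders: cancel order
    Orders-->>Payments: order.cancelled (event)
    Orders-->>Notifications: order.cancelled (event)
    Payments->>Payments: process refund
    Notifications-->>User: cancellation confirmation
```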
“Plain English” Flow Explainer
Not everyone reads technical documentation. Product managers and stakeholders need to understand flows without wading through event topic names and API paths. The plain English explainer produces a narrative description of a business flow. No technical jargon. Just a story of what happens and why. But it does it based on up-to-date technical documentation – the code is the truth.
Example output:
"When a customer cancels an order, the system first checks if the order is eligible for cancellation. If eligible, it reverses any payment charges and releases held inventory. The customer receives a confirmation email with the refund details. The host receives a notification about the cancelled booking."
This report is useful during discovery and planning. When a product manager asks “how does X work today?”, you can point them to the hub instead of scheduling a meeting with an engineer.
This report specifically also instructs the agent to use the web search tool to look for information in other online resources (e.g. the help center), which demonstrates the flexibility of the model. This is not a built-in feature of the architecture hub, just a tool available in the underlying platform that is composed into the process via the prompt. In my view it’s an interesting case of the “Application Logic Lives in Prompts” principle of agent-driven applications.
Also, this report essentially produces very similar information to the “Cross service flow analysis” report, only phrased in a way that suits a different audience – another demonstration of a capability easily enabled by LLMs.
So How Do We Use It?
Regardless of the actual query being performed, we already see the value here: answering quick questions as well as generating more complicated reports, with deeper analysis.
For AI Agents
AI agents used in software development are the primary intended audience here.
Several notable cases where this is used:
- A troubleshooting agent that brings together information from bug reports and live monitoring data (logs, Datadog), and also interacts with the architecture hub to understand relationships between services.
- Design tasks and assessing the impact of changes.
For Humans
Information gathering was a pain before the introduction of AI coding agents. The simple fact that we have up-to-date technical documentation already allows us to use it daily.
Examples:
- Onboarding to a new code repo – whether it’s new employees getting to know the system, or simply a neighboring team needing to make changes in a repo they don’t own. Understanding dependencies, call patterns and domain models.
- During planning: understanding impact and inter-team dependencies.
- Mapping customer inquiries (specifying required data objects) to the APIs that provide them, across the system.
- Quickly figuring out cross-repo dependencies in live design discussions; e.g. “what services consume these events?”
- Understanding complex flows and data dependencies.
We also foresee more cases where this can be used: PR reviews, incident investigation, understanding compliance issues.
Anything that requires system-wide information that is reflected in the technical architecture.
It’s important to note what the hub should not be used for. It should not be used for understanding code or functionality of a single repository (or very few loaded into a workspace). At least not as a primary source. There are also better ways to understand the evolution of repos (git history). Rationale for designs should probably also be gleaned from other sources if they exist, using the hub as a way to validate decisions and track adoption.
Code tells you what happened, Git tells you when it happened, design documents and plans describe why things happen. The hub connects these perspectives across the system, and serves as a map to navigate the terrain.
Challenges and Roadmap
I would be misrepresenting things if I presented this as a fully solved problem. There are remaining challenges, and more expected ahead.
First, staleness of data.
Stale documentation is in a way worse than non-existent documentation, since it may mislead people (and LLMs). Code changes after the initial ingestion, and the documentation needs to be updated.
As it currently stands, the automated CI workflow is opt-in (teams need to enable it via a simple GitHub workflow variable set to “true”). But this is just for the initial rollout period. Once we make sure everything works, and iron out the kinks, we can flip the condition and make it opt-out.
Additionally, each update records the time of the update, and each file contains a change log. So it should be easy to spot documentation files that are not up-to-date.
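One possible way to surface staleness candidates, assuming the hub repo’s own git history reflects updates (the function name is mine, for illustration):

```shell
# List each facet doc with the date of its last commit in the hub's
# git history, oldest first, to spot potentially stale files.
list_doc_ages() {
  # expects to run at the hub repo root
  for f in */*.md; do
    printf '%s %s\n' "$(git log -1 --format=%cs -- "$f")" "$f"
  done | sort
}
# usage (from the hub root): list_doc_ages
```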
Second, there is a quality variance. And this depends largely on the quality of ingested code12. Messy code with inconsistent patterns produces worse documentation. Code that is consistent, with known patterns and proper naming conventions is much easier for the LLM to understand and build the documentation for. The extraction skills look for API controllers or type definitions or configuration files in specific places. If the code doesn’t follow these conventions, the quality of generated documentation will degrade. We will fine-tune the extraction over time as we observe this, but this is largely a reactive measure.
Related to this is the problem of potential hallucination. Even though hallucinations are generally decreasing, at least with frontier models, this is still a potential issue, especially when an LLM is asked to describe the purpose or intent of a specific feature. As we know, it might make assumptions and present them confidently as facts. One way to mitigate this is by mandating citations of source code. This focuses LLMs on grounding their outputs in the real code. It seems to reduce hallucinations, and it also enables humans to more easily review and cross-reference findings.
Another issue that might come up is cost. Running LLMs at scale will cost us money. This is the main reason for having separate “update” and “full ingest” skills – the update skill modifies only what has changed instead of re-producing the entire file. We’ll need to monitor this and see how things can be optimized if necessary, e.g. batching a few changes and re-ingesting only after a few commits/merges.
Related to cost is the general issue of scale, when it comes to quality of service. What happens when the hub includes hundreds of documents? How long will it take to query it (even when done on a local file system), and how good will the result be?
We may very well need to adopt a more scalable solution, e.g. a proper database rather than file system searches, if we want faster answers for more (concurrent) users.
Perhaps the hardest hurdle to overcome is adoption. For this to be adopted internally it has to be better. Not marginally better – clearly better. So far the response from people who have seen it has been positive. And effort is being made to make querying as easy and painless as possible.
Some future thoughts involve also providing a mechanism to give feedback and local notes (inspired by the `annotate` and `feedback` commands in chub); but this is not implemented yet.
Adoption of course needs to be not just by humans querying it, but also by internal AI agents using it.
Beyond Initial Implementation
Currently the architecture hub has a solid foundation, and shows value. But there’s still work to do, some obvious, some less so.
In the short term, we need to increase coverage of all repos. This is more of a technical gap.
We will also need to fine-tune the extraction skills and associated templates. Some feedback is already incoming. The same goes for pre-defined reports.
After that we’ll need to make sure this is adopted by AI agents. In a sense, the application architecture hub should be part of the default context for all technical agents doing design, troubleshooting, and planning. This will require more standardized interfaces for querying and reports.
Another important step – ingesting more relevant information sources. Two immediately relevant sources are infrastructure information and design decisions (ADRs). This will enrich the available information and allow us to answer and connect information in different layers of the technical architecture – all the way from “why was this designed this way?” to “how is this actually deployed?”
Other architectural aspects may be interesting as well. For example, a security facet, mapping out authentication and authorization information as well as data sensitivity. This can help agents understand and design secure software, consistent with the rest of the system.
As noted above, having a feedback mechanism is also very useful for enabling continuous improvement, hopefully grassroots, that will maintain and improve the quality of the information.
Other steps might include (depending on need) introducing semantic search (RAG?) so we avoid issues with terminology misalignment, or having the user know the exact repo to start with.
When it comes to accessibility for larger audiences (not so much AI agents), a visual explainer – automatically produced diagrams – can prove useful for humans who need a living, breathing map of the system.
Takeaways
The architecture hub started from a simple observation13: AI agents are great at understanding code (and getting better), but larger systems, with a lot of moving parts are harder to accommodate reliably in one agent’s context window. Knowing how services interact, where data flows, how changes propagate – this is intractable in a large distributed system. If we want AI to go beyond simply coding, we have to teach it what we know. Knowing the system was a problem even before AI came along. LLMs just exposed the gap and made it more obvious. We got hungry for more.
But given the right mechanisms and tools, LLMs also present a solution. We can now generate and update reliable technical documentation at scale, simply because it’s mechanized.
LLMs emphasize the need and present the solution at the same time. In this system, AI is both the consumer and maintainer of architectural knowledge.
There are already some interesting points to learn from this (still ongoing) journey:
- For this to work, the extraction process needs to be engineered. We need to make sure the quality is high and that it can scale technically and organizationally.
- Architecture is built on different aspects. Having one document cover everything is hard, and inefficient. The idea of different facets is important for effectiveness as well as efficiency.
- Humans in the loop are important to understand errors, but also to build trust in the system. We’re trying to extract years of human-generated knowledge (in the form of code) and let machines run with it.
- The value is in the query. The documents themselves are great, but AI and people need answers. The hub’s main value will come from delivering answers; documents are just the substrate on which this is built.
- The original motivation (and still the main one) is for AI coding agents to consume the knowledge. But this is also extremely helpful for humans. It so happens that having reliable documentation, with consistent templates and explicit citations is useful for humans as well.
I’m betting that AI-maintained documentation can outpace human-maintained documentation. So far, feedback has been positive.
But the real test will come with adoption: when people and agents use the architecture hub as the first place to look for information.
(and yes, all dashes in this post are hand-typed)
- This was also, unsurprisingly, one of the conclusions from the testing of Dialectic. See “Does Clarification Matter?” here. ↩︎
- That would be what I called the 2nd phase in a possible AI adoption roadmap. ↩︎
- Which is also useful of course ↩︎
- HTTP calls, domain models, events raised and messages consumed, … ↩︎
- We all know the “Documentation” work item that gets pushed across sprints until it’s simply marked as obsolete. ↩︎
- And I’m not sure about the root of all evil, but it’s a surefire way to get stuck in analysis-paralysis. ↩︎
- For example, backend services are irrelevant for frontend applications. Similarly, frontend applications don’t expose HTTP-based APIs. ↩︎
- Is that a valid word? ↩︎
- I have to admit, it was somewhat of a “proud dad” moment, watching the dialectic agent pick up the relevant files from the architecture hub, copying them to its working directory and feeding them to the debating agents. ↩︎
- Or at least a decent first draft that can be more easily validated. ↩︎
- An anti-pattern(?), but that’s a discussion for another time. ↩︎
- “Garbage in Garbage out” holds also for technical documentation. ↩︎
- That I believe is now more or less a consensus. ↩︎