Unexpected XML-Like Tag Fragments in LLM Output
If you notice something like "alteraitions>" or other XML-like tag fragments appearing at the end of a model’s text output, this is almost certainly not intentional model content. Instead, it is typically a sign that part of the model’s internal metadata markup is leaking through to the user-facing text.
Important: This issue is often transient and intermittent: it does not appear in every response and can be difficult to reproduce consistently. The underlying cause is usually tied to specific runtime conditions (e.g., streaming timing, network latency, partial flushes) rather than a deterministic pattern in the model itself.
1. Hidden Metadata Leak (Most Common Cause)
Some LLMs internally annotate responses with XML-like sections to track metadata, processing notes, or modifications to the prompt/response.
For example, you would normally never see something like:
<alterations>...</alterations>
…but a bug or misconfiguration in your integration can cause the raw closing tag (e.g., </alterations>, or a corrupted form like alteraitions>) to appear in the visible text.
2. Streaming Boundary Artifact
If you’re streaming output from the model, a partial flush of tokens or chunk boundary issue may expose internal tags before your rendering logic strips them out. This can happen if:
You concatenate all stream chunks blindly without filtering.
Your client/proxy doesn’t properly separate text tokens from metadata events.
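As a rough illustration of the chunk-boundary case, the sketch below assumes a naive per-chunk cleanup step; the regex, the chunk contents, and the function name are hypothetical, not taken from any real SDK:

// Naive cleanup that strips complete XML-like tags within a single chunk.
const stripTags = (chunk: string): string => chunk.replace(/<\/?[a-z]+>/g, "");

// A metadata tag split across a chunk boundary defeats per-chunk stripping:
// neither half contains a complete tag, so the markup reaches the UI intact.
const chunks = ["The answer is 42.</alter", "ations>"];
const rendered = chunks.map(stripTags).join("");
console.log(rendered); // "The answer is 42.</alterations>"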
3. Integration or SDK Schema Mismatch
If your SDK or API client is outdated or not aligned with the model’s current response schema:
Structured fields can be mistaken for text.
Metadata tokens might be passed directly into the output stream.
Certain XML-like markers that should be removed may slip through.
How to Report It
Open your browser’s Developer Tools (the exact name varies by browser), go to the Network tab, and select Response.
Find the specific session ID in the Response and include the trace info in your report.
How to Prevent It
Update your SDK/client
Make sure you’re using the latest API client or SDK version for the model you’ve upgraded to.
If you’re making raw API calls, ensure your parser extracts only the intended text content.
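For raw API calls, a parser along the lines of the sketch below keeps only the intended text. The response shape (an output array of typed parts) is an assumption for illustration, not any specific provider’s schema:

// Hypothetical response shape, for illustration only; check your provider's schema.
interface ResponsePart {
  type: string;   // e.g. "text", "metadata"
  text?: string;
}

interface ModelResponse {
  output: ResponsePart[];
}

// Keep only the user-facing text; ignore metadata parts entirely.
function extractText(response: ModelResponse): string {
  return response.output
    .filter((part) => part.type === "text" && typeof part.text === "string")
    .map((part) => part.text as string)
    .join("");
}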
Filter non-text event types in streaming
Many streaming APIs emit multiple event types (e.g., "text", "metadata"). Only render tokens where type === "text".
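A minimal sketch of that check, assuming a hypothetical async iterator of typed events (most streaming clients expose something similar, though the exact names differ):

// Hypothetical event shape; real streaming SDKs use different field names.
interface StreamEvent {
  type: "text" | "metadata";
  text?: string;
}

// Append only text events to the visible output; ignore everything else.
async function renderStream(stream: AsyncIterable<StreamEvent>): Promise<string> {
  let visible = "";
  for await (const event of stream) {
    if (event.type === "text" && event.text) {
      visible += event.text;
    }
  }
  return visible;
}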
Avoid dumping raw API objects to the UI
Render only the concatenated text payload.
Drop any fields that aren’t user-facing.
Optional: Add a stop sequence as a guard
You can temporarily add a stop sequence for the leaked tag (e.g., </alterations>). This is a quick fix, but the real solution is to correct the parsing logic.
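If you go this route, most completion APIs accept stop sequences in the request body; the exact field name varies by provider (e.g., stop or stop_sequences), so the snippet below is only a generic illustration with placeholder values:

// Hypothetical request payload; check your API's documentation for the real field name.
const requestBody = {
  model: "your-model-id",
  messages: [{ role: "user", content: "Your prompt here" }],
  // Cut generation short if the leaked closing tag ever appears in the output.
  stop: ["</alterations>"],
};

// Placeholder endpoint; substitute your provider's real URL and auth headers.
fetch("https://api.example.com/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(requestBody),
}).then((res) => res.json()).then(console.log);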
Note: "alteraitions>"
is just one example of a corrupted closing tag. Similar artifacts may appear with other tag names depending on the model’s internal markup conventions. The key is to treat them as integration/streaming cleanup issues, not as part of the model’s intended response.
Jira Ref.: