If you notice something like "alteraitions>" or other XML-like tag fragments appearing at the end of a model’s text output, this is almost certainly not intentional model content. Instead, it’s typically a sign that part of the model’s internal metadata markup is leaking through to the user-facing text.

Important: This issue is often transient and intermittent — meaning it does not appear in every response and can be difficult to reproduce consistently. The underlying cause is usually tied to specific runtime conditions (e.g., streaming timing, network latency, partial flushes) rather than a deterministic pattern in the model itself.


1. Hidden Metadata Leak (Most Common Cause)

Some LLMs internally annotate responses with XML-like sections to track metadata, processing notes, or modifications to the prompt/response.
For example, you would normally never see something like:

<alterations>...</alterations>

…but a bug or misconfiguration in your integration can cause the raw closing tag (e.g., </alterations> or a corrupted form like alteraitions>) to appear in the visible text.
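As a stopgap on the client side, leaked tags can be stripped before display. The sketch below assumes a known (or observed) set of internal tag names — `alterations` here, plus the corrupted spelling from the example — and removes both well-formed tags and closing fragments that lost their leading `<`:

```python
import re

# Tag names suspected of leaking; extend with whatever you observe.
# These names are illustrative, not a documented list.
LEAKED_TAG_NAMES = ["alterations", "alteraitions"]

_names = "|".join(LEAKED_TAG_NAMES)
_LEAK_RE = re.compile(
    r"</?(?:%s)>" % _names      # <alterations> or </alterations>
    + r"|(?:%s)>" % _names      # corrupted fragment like "alteraitions>"
)

def strip_leaked_tags(text: str) -> str:
    """Remove internal metadata tags that leaked into user-facing text."""
    return _LEAK_RE.sub("", text).rstrip()
```

This is a cleanup guard, not a fix — the sections below cover the underlying causes.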


2. Streaming Boundary Artifact

If you’re streaming output from the model, a partial flush of tokens or a chunk boundary issue may expose internal tags before your rendering logic strips them out. This can happen if a tag is split across two chunks, if the renderer displays raw deltas before sanitization runs, or if a flush fires mid-tag.
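A split tag defeats naive per-chunk filtering, because neither chunk contains the full tag. A minimal sketch of boundary-safe sanitization (independent of any particular SDK) is to buffer a trailing fragment that might still become a tag and only emit it once the next chunk resolves it:

```python
import re

# Any complete XML-like tag, e.g. <alterations> or </alterations>.
TAG_RE = re.compile(r"</?[A-Za-z][\w-]*>")
# A trailing "<..." fragment that may be a tag split across chunks.
PARTIAL_RE = re.compile(r"<[^>]*$")

def sanitize_stream(chunks):
    """Yield user-visible text, stripping tags even when a tag
    straddles a chunk boundary."""
    buf = ""
    for chunk in chunks:
        buf += chunk
        buf = TAG_RE.sub("", buf)           # drop any now-complete tags
        partial = PARTIAL_RE.search(buf)    # hold back a possibly split tag
        if partial:
            yield buf[:partial.start()]
            buf = buf[partial.start():]
        else:
            yield buf
            buf = ""
    if buf:                                 # end of stream: leftovers are
        yield buf                           # plain text, emit them
```

The trade-off is slight latency on any literal `<` in the text, which is held until the stream shows it is not a tag.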


3. Integration or SDK Schema Mismatch

If your SDK or API client is outdated or not aligned with the model’s current response schema, fields that were meant to stay internal (metadata blocks, annotations, tool markup) may be serialized straight into the visible text.
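A defensive pattern is to render only event types you explicitly recognize, rather than stringifying whole response objects. The event shapes below are illustrative, not any specific vendor’s schema:

```python
def extract_text(events):
    """Collect text from recognized events only; skip everything else,
    so schema additions can't leak markup into the UI."""
    out = []
    for ev in events:
        if ev.get("type") == "text_delta":
            out.append(ev.get("text", ""))
        # Unknown types (metadata, annotations, tool calls) are ignored.
    return "".join(out)
```

With this approach, a new or leaking event type degrades to silence rather than to visible tag fragments.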


How to Report It

  1. In your browser, open Developer Tools (name may vary depending on your browser), go to the Network tab, and select Response.

  2. Find the specific session ID in the Response and include the trace info in your report.


How to Prevent It

  1. Update your SDK/client so its parser matches the current response schema.

  2. Filter non-text event types in streaming so metadata events never reach the renderer.

  3. Avoid dumping raw API objects to the UI; render only the documented text fields.

  4. Optional: Add a stop sequence as a guard so generation halts if a known leaked tag starts streaming.
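For the stop-sequence guard, the request shape below is a hypothetical sketch — the parameter name (`stop` here) and its placement vary by provider, so check your API’s reference before relying on it:

```python
# Hypothetical request payload: if the known leaked closing tag ever
# starts streaming, generation halts before the fragment is displayed.
request = {
    "model": "example-model",                       # placeholder name
    "messages": [{"role": "user", "content": "Hello"}],
    "stop": ["</alterations>"],                     # guard sequence
}
```

Note this only protects against tags the model actually emits as text; it does not fix integration-side leaks, so combine it with the filtering steps above.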


Note: "alteraitions>" is just one example of a corrupted closing tag. Similar artifacts may appear with other tag names depending on the model’s internal markup conventions. The key is to treat them as integration/streaming cleanup issues, not as part of the model’s intended response.

Jira Ref.: