# 04 · Web Research — run artifact

Captured from the live dscc run on 2026-04-16. Raw transcript in `dscc_run.log`.
| Field | Value |
| --- | --- |
| Model | doubao-seed-2.0-code (Volces ARK) |
| Permission mode | read-only |
| Tools the model actually used | WebSearch, WebFetch (5×); no filesystem writes |
| Source the model cited | https://docs.anthropic.com/en/api/messages |
## Output — three dense tables (as produced by the model)

Source: https://docs.anthropic.com/en/api/messages
### Request body schema

| Field Name | Type | Required? | Description |
| --- | --- | --- | --- |
| model | string | Yes | The model that will complete your prompt. |
| messages | array | Yes | Input messages, alternating between user and assistant turns. |
| max_tokens | integer | Yes | Maximum number of tokens to generate before stopping. |
| metadata | object | No | Metadata about the request, including user_id for end-user tracking. |
| stop_sequences | array | No | Custom text sequences that will cause the model to stop generating. |
| stream | boolean | No | Whether to incrementally stream the response using server-sent events. |
| system | string | No | System prompt setting context, personality, or instructions. |
| temperature | float | No | Amount of randomness injected into the response (0.0–1.0). |
| tool_choice | object | No | How the model should use tools (auto, any, or specific tool). |
| tools | array | No | Definitions of tools the model may call. |
| top_k | integer | No | Sample from only the top K options for each subsequent token. |
| top_p | float | No | Use nucleus sampling with cumulative probability threshold. |
| thinking | object | No | Configure extended thinking mode for complex reasoning. |
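The schema above can be sketched as a small request-body builder. This is a hypothetical helper for illustration, not part of any SDK; the field names and which three are required come straight from the table:

```python
import json

# Optional fields the schema above lists; anything else is dropped.
OPTIONAL_FIELDS = {"metadata", "stop_sequences", "stream", "system", "temperature",
                   "tool_choice", "tools", "top_k", "top_p", "thinking"}

def build_messages_request(model: str, user_text: str,
                           max_tokens: int = 1024, **optional) -> dict:
    """Assemble a Messages API request body.

    `model`, `messages`, and `max_tokens` are the three required fields;
    everything else is passed through only if the schema lists it.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
        "max_tokens": max_tokens,
    }
    body.update({k: v for k, v in optional.items() if k in OPTIONAL_FIELDS})
    return body

req = build_messages_request("claude-sonnet-4-5", "Summarize SSE in one line.",
                             temperature=0.2, stream=True)
print(json.dumps(req, indent=2))
```

The model name here is illustrative; the builder itself only enforces the required/optional split, not per-field value validation.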
### Streaming event types

| Event Name | When It Fires |
| --- | --- |
| message_start | When the message object is first created, with initial metadata. |
| content_block_start | When a new content block starts (text, tool_use, etc.). |
| content_block_delta | When a piece of content is added to a content block. |
| content_block_stop | When a content block is finished being generated. |
| message_delta | When top-level message fields change (stop reason, usage). |
| message_stop | When the entire message is complete and the stream ends. |
| ping | Periodically to keep the connection alive. |
| error | When an error occurs, with details about what went wrong. |
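Since the stream is delivered as server-sent events, the lifecycle in this table can be seen by pulling the `event:` line out of each SSE block. A minimal sketch, assuming standard SSE framing (blocks separated by blank lines); the sample transcript is fabricated for illustration:

```python
def sse_event_names(raw: str) -> list[str]:
    """Extract the event name from each SSE block ('event: ...' lines)."""
    names = []
    for block in raw.strip().split("\n\n"):
        for line in block.splitlines():
            if line.startswith("event:"):
                names.append(line.split(":", 1)[1].strip())
    return names

# Fabricated transcript showing the normal event lifecycle.
sample = (
    "event: message_start\ndata: {}\n\n"
    "event: content_block_start\ndata: {}\n\n"
    "event: content_block_delta\ndata: {}\n\n"
    "event: content_block_stop\ndata: {}\n\n"
    "event: message_delta\ndata: {}\n\n"
    "event: message_stop\ndata: {}\n\n"
)
print(sse_event_names(sample))
# → ['message_start', 'content_block_start', 'content_block_delta',
#    'content_block_stop', 'message_delta', 'message_stop']
```

A real client would also dispatch on the `data:` payload of each block; only the event-name framing is shown here.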
### Rate-limit headers

| Header Name | Meaning |
| --- | --- |
| anthropic-ratelimit-requests-limit | Maximum requests allowed per rate-limit window. |
| anthropic-ratelimit-requests-remaining | Requests remaining in the current window. |
| anthropic-ratelimit-requests-reset | Unix timestamp when the request limit window resets. |
| anthropic-ratelimit-tokens-limit | Maximum tokens allowed per rate-limit window. |
| anthropic-ratelimit-tokens-remaining | Tokens remaining in the current window. |
| anthropic-ratelimit-tokens-reset | Unix timestamp when the token limit window resets. |
| retry-after | Seconds to wait before retrying when rate-limited. |
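These headers suggest a simple backoff rule. A minimal sketch, assuming the reset headers carry Unix timestamps as the table states; `backoff_seconds` is a hypothetical helper, not a documented API:

```python
def backoff_seconds(headers: dict, now: float) -> float:
    """Decide how long to wait before the next request.

    Prefers retry-after when the server sent it; otherwise, if either
    the request or token budget is exhausted, sleeps until that
    budget's reset timestamp (assumed Unix seconds per the table).
    """
    if "retry-after" in headers:
        return float(headers["retry-after"])
    for kind in ("requests", "tokens"):
        remaining = int(headers.get(f"anthropic-ratelimit-{kind}-remaining", 1))
        if remaining <= 0:
            reset = float(headers.get(f"anthropic-ratelimit-{kind}-reset", now))
            return max(0.0, reset - now)
    return 0.0

# Fabricated headers: request budget exhausted, resets 30 s from "now".
hdrs = {"anthropic-ratelimit-requests-remaining": "0",
        "anthropic-ratelimit-requests-reset": "1700000030"}
print(backoff_seconds(hdrs, now=1700000000.0))  # → 30.0
```

In practice the caller would sleep for the returned duration and then retry, typically with some jitter added.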
## Verdict vs. cookbook spec

- Three tables present in the exact order the prompt asks for.
- Request-body table includes `model`, `messages`, `max_tokens`.
- Streaming table includes `message_start`, `content_block_delta`, `message_stop` (and the full set).
- A source URL leads the output.
- No `edit_file` / `write_file` / `bash` calls — ReadOnly mode held.
## Notes on the run

- The model went through several WebFetch attempts (`/docs/en/build-with-claude/streaming`, `/docs/api-reference/messages`, an OpenAPI path, `api.anthropic.com/`). The second URL 404’d and the OpenAPI path returned a JS shell, but the first streaming page plus general model knowledge were enough to produce complete tables.
- Total elapsed ≈ 30 s. Tool trace preserved in the raw log.