The parse_json function interprets a string as JSON and returns the value as a dynamic object. Use this function to extract structured data from JSON-formatted log entries, API responses, or configuration values stored as JSON strings.
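For example, the following query is a minimal sketch that parses an inline JSON literal and extracts a nested value. In practice, you would pass a string field from your own dataset instead of the literal, and the user.id path shown here is illustrative.

['sample-http-logs']
| extend json_data = parse_json('{"user": {"id": "u123"}, "duration": 42}')
| extend user_id = tostring(json_data.user.id)
| project _time, user_id
| limit 10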

For users of other query languages

If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you use spath to parse JSON. APL’s parse_json provides similar functionality with dynamic object support.
| spath input=json_field
| rename "field.name" as extracted_value
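A roughly equivalent APL query looks like the following sketch, which reuses the placeholder names json_field and field.name from the SPL example above:

['sample-http-logs']
| extend parsed = parse_json(json_field)
| extend extracted_value = tostring(parsed.field.name)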
In ANSI SQL, JSON parsing varies by database with different functions. APL’s parse_json provides standardized JSON parsing.
SELECT JSON_EXTRACT(json_field, '$.field.name') AS extracted_value FROM logs;
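In APL, the same extraction can be sketched as follows, again with json_field and field.name standing in for your own column and JSON path:

['sample-http-logs']
| extend extracted_value = tostring(parse_json(json_field).field.name)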

Usage

Syntax

parse_json(json_string)

Parameters

Name         Type    Required  Description
json_string  string  Yes       A string containing valid JSON to parse.

Returns

Returns a dynamic object representing the parsed JSON. If the JSON is invalid, returns the original string.
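The following sketch contrasts the two outcomes by parsing one valid and one invalid input and inspecting the results with gettype (described in the list of related functions below); the literal inputs are illustrative only.

['sample-http-logs']
| extend valid = parse_json('{"a": 1}'), invalid = parse_json('not json')
| extend valid_type = gettype(valid), invalid_type = gettype(invalid)
| project _time, valid_type, invalid_type
| limit 1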

Use case examples

Parse JSON-formatted log messages to extract specific fields for analysis.

Query

['sample-http-logs']
| extend json_data = parse_json('{"response_time": 145, "cache_hit": true, "endpoint": "/api/users"}')
| extend response_time = toint(json_data.response_time)
| extend cache_hit = tobool(json_data.cache_hit)
| extend endpoint = tostring(json_data.endpoint)
| project _time, response_time, cache_hit, endpoint, status
| limit 10
Output

_time                 response_time  cache_hit  endpoint    status
2024-11-06T10:00:00Z  145            true       /api/users  200
2024-11-06T10:01:00Z  145            true       /api/users  200
This query parses JSON-formatted metadata from logs to extract performance metrics like response time and cache hit status.

Best practices

When working with JSON data in Axiom, consider the following best practices:
  • Prefer structured ingestion over runtime parsing: If possible, structure your JSON data as separate fields during ingestion rather than storing it as a stringified JSON object. This provides better query performance and enables indexing on nested fields.
  • Use map fields for nested data: For nested or unpredictable JSON structures, consider using map fields instead of stringified JSON. Map fields allow you to query nested properties directly without using parse_json at query time.
  • Avoid mixed types: When logging JSON data, ensure consistent field types across events. Mixed types (for example, sometimes a string, sometimes a number) can cause query issues. Use type conversion functions like toint or tostring when necessary, as shown in the sketch after this list.
  • Performance considerations: Using parse_json at query time adds CPU overhead. For frequently queried JSON data, consider parsing during ingestion or using map fields for better performance.
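The following sketch illustrates the mixed-types point above: the duration value arrives as a string inside the inline JSON literal, and toint normalizes it to a number at query time. The field names and the literal are illustrative.

['sample-http-logs']
| extend json_data = parse_json('{"duration": "145"}')
| extend duration_ms = toint(json_data.duration)
| project _time, duration_ms
| limit 10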

List of related functions

  • parse_url: Parses URLs into components. Use this specifically for URL parsing rather than general JSON.
  • parse_csv: Parses CSV strings. Use this for comma-separated values rather than JSON.
  • todynamic: Alias for parse_json. Use either name based on your preference.
  • gettype: Returns the type of a value. Use this to check the types of parsed JSON fields.