Deep Dive into JSON Parsing: From Strings to Structured Data
JSON.parse() is one of the most frequently called functions in JavaScript, yet most developers never look beyond the basics. Understanding how parsing works, what it cannot handle, and how to extend it will make you better at debugging data issues and processing complex payloads.
How JSON.parse Works
JSON.parse() takes a string and produces a JavaScript value. It processes the string character by character, building up the corresponding JavaScript types:
| JSON | JavaScript |
|---|---|
| "string" | string |
| 123 | number |
| true / false | boolean |
| null | null |
| [...] | Array |
| {...} | Object |
The parser is strict: any deviation from the JSON specification throws a SyntaxError. There is no partial parsing or error recovery built in.
// Basic usage
const data = JSON.parse('{"name":"Alice","age":30}');
console.log(data.name); // "Alice"
console.log(typeof data.age); // "number"
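That strictness is easy to demonstrate: trailing commas, single-quoted strings, and comments all throw. When a fallback value is preferable to an exception, a small wrapper is a common pattern (the tryParse name here is my own, not a built-in):

```typescript
// JSON.parse rejects anything outside the spec:
// JSON.parse('{"a": 1,}')  -> SyntaxError (trailing comma)
// JSON.parse("{'a': 1}")   -> SyntaxError (single quotes)

// A small wrapper that returns a fallback instead of throwing
function tryParse(json: string, fallback: any): any {
  try {
    return JSON.parse(json);
  } catch {
    return fallback;
  }
}

console.log(tryParse('{"a": 1,}', null)); // null (invalid JSON)
console.log(tryParse('{"a": 1}', null));  // { a: 1 }
```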
The Reviver Function
JSON.parse accepts an optional second argument: a reviver function that transforms values during parsing. This is the key to handling types that JSON does not natively support.
const json = '{"created":"2026-03-23T10:30:00Z","amount":4999}';
const data = JSON.parse(json, (key, value) => {
// Convert ISO date strings to Date objects
if (typeof value === "string" && /^\d{4}-\d{2}-\d{2}T/.test(value)) {
return new Date(value);
}
return value;
});
console.log(data.created instanceof Date); // true
console.log(data.created.getFullYear()); // 2026
The reviver is called for every key-value pair, bottom-up. The final call has an empty string as the key and the root object as the value.
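The order is easy to observe by recording each key the reviver receives: innermost values come first, siblings follow in document order, and the root arrives last under the empty key.

```typescript
// Record the keys in the order the reviver sees them
const seen: string[] = [];
JSON.parse('{"a": {"b": 1}, "c": 2}', (key, value) => {
  seen.push(key);
  return value;
});
console.log(seen); // ["b", "a", "c", ""]
```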
Common Reviver Patterns
Date parsing:
function dateReviver(key: string, value: unknown): unknown {
if (typeof value === "string") {
const date = new Date(value);
if (!isNaN(date.getTime()) && value.includes("T")) {
return date;
}
}
return value;
}
BigInt support:
// API returns: {"id": "9007199254740993", "type": "bigint"}
function bigintReviver(key: string, value: unknown): unknown {
if (typeof value === "string" && /^\d{16,}$/.test(value)) {
return BigInt(value);
}
return value;
}
The Precision Problem
JavaScript numbers are 64-bit floating point (IEEE 754). Integers larger than Number.MAX_SAFE_INTEGER (2^53 - 1 = 9007199254740991) lose precision:
const json = '{"id": 9007199254740993}';
const data = JSON.parse(json);
console.log(data.id); // 9007199254740992 (wrong!)
The last digit changed because the number exceeds the safe integer range. This is a real problem with 64-bit database identifiers such as Snowflake IDs and Twitter IDs.
Solutions:
- Have the API return large integers as strings.
- Use a custom parser that preserves the raw string representation.
- Use the reviver to convert to BigInt (if the field is known).
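The second and third options can be combined: rewrite long bare integer literals in the raw JSON string before parsing, then revive them as BigInt. The sketch below (parseWithBigInt is my own name) uses a simple regex that only handles integers in object-value position; a production version would use a real tokenizer.

```typescript
// Quote long bare integer literals in the raw JSON, tagging them with a
// trailing "n" so pre-existing digit-only strings are left untouched,
// then revive the tagged values as BigInt.
function parseWithBigInt(json: string): unknown {
  const quoted = json.replace(/:\s*(\d{16,})(\s*[,}\]])/g, ': "$1n"$2');
  return JSON.parse(quoted, (key, value) =>
    typeof value === "string" && /^\d{16,}n$/.test(value)
      ? BigInt(value.slice(0, -1))
      : value
  );
}

const result = parseWithBigInt('{"id": 9007199254740993}') as { id: bigint };
console.log(result.id === 9007199254740993n); // true — no precision lost
```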
Parsing Deeply Nested JSON
Deeply nested JSON can cause stack overflow errors in recursive parsers. While JSON.parse is implemented natively and handles significant depth, manually traversing deeply nested results can be problematic:
// Safe deep access with optional chaining
const city = data?.user?.address?.city ?? "Unknown";
// Recursive traversal with depth limit
function traverse(
obj: unknown,
callback: (key: string, value: unknown) => void,
maxDepth = 50,
depth = 0
): void {
if (depth > maxDepth) return;
if (obj && typeof obj === "object") {
for (const [key, value] of Object.entries(obj as Record<string, unknown>)) {
callback(key, value);
traverse(value, callback, maxDepth, depth + 1);
}
}
}
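When the nesting depth is unknown or unbounded, an explicit stack avoids call-stack recursion entirely. A sketch (traverseIterative is my own name) under the same callback shape as above:

```typescript
// Iterative traversal: an explicit stack replaces call-stack recursion,
// so arbitrarily deep structures cannot overflow the call stack.
function traverseIterative(
  root: unknown,
  callback: (key: string, value: unknown) => void
): void {
  const stack: unknown[] = [root];
  while (stack.length > 0) {
    const current = stack.pop();
    if (current && typeof current === "object") {
      for (const [key, value] of Object.entries(current as Record<string, unknown>)) {
        callback(key, value);
        stack.push(value);
      }
    }
  }
}
```

Note that a stack visits siblings in reverse and descends depth-first; swap in a queue (shift instead of pop) for breadth-first order.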
JSON Path: Querying Parsed Data
Once JSON is parsed, you often need to extract specific values from complex structures. JSON Path expressions provide a query language:
// Simple path resolver
function getByPath(obj: any, path: string): any {
return path
.replace(/\[(\d+)\]/g, ".$1")
.split(".")
.reduce((current, key) => current?.[key], obj);
}
const data = {
store: {
books: [
{ title: "Refactoring", price: 49.99 },
{ title: "Clean Code", price: 39.99 },
],
},
};
getByPath(data, "store.books[0].title"); // "Refactoring"
getByPath(data, "store.books[1].price"); // 39.99
Security Considerations
Parsing untrusted JSON is generally safe because JSON.parse does not execute code. However, there are indirect risks:
Prototype pollution. A malicious payload can set __proto__ properties:
{"__proto__": {"isAdmin": true}}
After parsing and merging into another object, this can pollute the prototype chain. Defend against this by filtering keys:
function safeParse(json: string): Record<string, unknown> {
return JSON.parse(json, (key, value) => {
if (key === "__proto__" || key === "constructor" || key === "prototype") {
return undefined; // Strip dangerous keys
}
return value;
});
}
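To see why the filtering matters, here is a deliberately vulnerable deep merge, written only to illustrate the attack (not real library code):

```typescript
// A deliberately naive deep merge, vulnerable to prototype pollution
function naiveMerge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (source[key] && typeof source[key] === "object") {
      target[key] = naiveMerge(target[key] ?? {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an own property, so Object.keys sees it;
// the merge then walks through the __proto__ getter into Object.prototype.
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
naiveMerge({}, payload);

// Every plain object now "has" isAdmin via the prototype chain
console.log(({} as any).isAdmin); // true
delete (Object.prototype as any).isAdmin; // undo the pollution
```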
Denial of service. Extremely large JSON payloads can exhaust memory. Always validate payload size before parsing:
const MAX_JSON_SIZE = 10 * 1024 * 1024; // 10 MB
function parseWithLimit(json: string): unknown {
if (json.length > MAX_JSON_SIZE) {
throw new Error(`JSON payload too large: ${json.length} characters`);
}
return JSON.parse(json);
}
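One subtlety: a string's .length counts UTF-16 code units, not bytes, so it understates the size of payloads containing multi-byte characters. When the limit must be byte-accurate, TextEncoder gives the UTF-8 size:

```typescript
// .length counts UTF-16 code units; TextEncoder counts UTF-8 bytes
const json = '{"emoji":"🎉"}';
console.log(json.length);                          // 14 (code units)
console.log(new TextEncoder().encode(json).length); // 16 (UTF-8 bytes)
```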
JSON.stringify: The Other Direction
Serialization has its own nuances. JSON.stringify silently drops undefined, functions, and symbols:
JSON.stringify({ a: undefined, b: () => {}, c: Symbol() });
// "{}"
Use a replacer function to handle special types:
const data = { id: BigInt("9007199254740993"), name: "test" };
const json = JSON.stringify(data, (key, value) =>
typeof value === "bigint" ? value.toString() : value
);
// '{"id":"9007199254740993","name":"test"}'
Pretty Printing
The third argument to JSON.stringify controls indentation:
JSON.stringify(data, null, 2); // 2-space indent
JSON.stringify(data, null, "\t"); // tab indent
This is essential for debugging but should not be used in production APIs where payload size matters. A 2-space indented JSON file can be 30-40% larger than its minified equivalent.
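The overhead is easy to measure on any payload (the exact ratio depends on nesting depth and key lengths):

```typescript
// Compare minified vs. pretty-printed sizes of the same value
const payload = { id: 1, tags: ["a", "b", "c"], nested: { ok: true } };
const min = JSON.stringify(payload);
const pretty = JSON.stringify(payload, null, 2);
console.log(min.length, pretty.length); // pretty is noticeably larger
```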
Try our JSON Parser to parse, explore, and query JSON data instantly — right in your browser, no upload required.