Unix timestamp ↔ human-readable date/time
A Unix timestamp is the number of seconds (or milliseconds) elapsed since 1970-01-01 00:00:00 UTC. It's the lingua franca of timestamps in databases, log files, APIs, and programming languages — easy to compare, easy to sort, time-zone agnostic.
This tool converts Unix timestamps to human-readable date/time and back, in both seconds and milliseconds. All conversion happens in your browser using the native Date API.
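If you want to do the same round trip by hand, here is a minimal sketch using that same native Date API, runnable in the browser console (1700000000 is simply the sample timestamp used in the examples below):

    // Timestamp (seconds) -> human-readable, via the native Date API
    const ts = 1700000000;                 // seconds since the epoch
    const d = new Date(ts * 1000);         // Date counts in milliseconds
    console.log(d.toISOString());          // "2023-11-14T22:13:20.000Z"

    // Human-readable -> timestamp (seconds)
    const back = Math.floor(Date.parse("2023-11-14T22:13:20Z") / 1000);
    console.log(back);                     // 1700000000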
Example conversions

Input: 1700000000 (seconds)
Local: 2023-11-15 06:13:20
UTC:   2023-11-14 22:13:20
ISO:   2023-11-14T22:13:20.000Z

Input: 1700000000000 (milliseconds)
Local: 2023-11-15 06:13:20.000
UTC:   2023-11-14 22:13:20.000
ISO:   2023-11-14T22:13:20.000Z

(The Local values above were rendered in a UTC+8 browser; yours will follow your own time zone.)

How do I tell seconds from milliseconds?

By length. Today's timestamps are around 10 digits in seconds and 13 digits in milliseconds. If your number has 10 digits, it's seconds; if it has 13, it's milliseconds. JavaScript Date uses milliseconds; most Unix CLIs and Linux APIs use seconds.
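That digit-count rule is easy to automate. A rough sketch (the helper name toMillis is made up here, and the 13-digit cutoff is an assumption that only holds for timestamps between 2001 and roughly 2286):

    // Guess the unit from digit count, then normalize to milliseconds.
    // Hypothetical helper; ">= 13 digits means milliseconds" assumes
    // modern-era timestamps.
    function toMillis(ts) {
      return String(Math.trunc(Math.abs(ts))).length >= 13 ? ts : ts * 1000;
    }

    console.log(new Date(toMillis(1700000000)).toISOString());    // "2023-11-14T22:13:20.000Z"
    console.log(new Date(toMillis(1700000000000)).toISOString()); // same instant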
What is the Year 2038 problem?

On 2038-01-19 03:14:07 UTC, a signed 32-bit integer storing seconds since the epoch overflows. Modern 64-bit systems and JavaScript (which uses double-precision floats) aren't affected, but legacy embedded systems and old database columns may need migration.
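You can see the boundary directly in the console; a small sketch:

    // The largest value a signed 32-bit time_t can hold:
    const max32 = 2 ** 31 - 1;                         // 2147483647
    console.log(new Date(max32 * 1000).toISOString()); // "2038-01-19T03:14:07.000Z"

    // One second later is fine in JavaScript, whose Date spans
    // about +/- 273,000 years (+/- 8.64e15 ms):
    console.log(new Date((max32 + 1) * 1000).toISOString()); // "2038-01-19T03:14:08.000Z"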
What time zone is a Unix timestamp in?

Unix timestamps are always UTC by definition — no time zone is encoded. The 'Local' values above are rendered using your browser's time zone for convenience. The actual instant is the same everywhere.
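To make that concrete, here is one instant rendered for two zones via toLocaleString; Asia/Shanghai is just an example zone (UTC+8, matching the Local values shown above):

    // Same instant, different wall clocks; the timestamp never changes.
    const d = new Date(1700000000 * 1000);
    console.log(d.toLocaleString("en-GB", { timeZone: "UTC" }));
    // "14/11/2023, 22:13:20"
    console.log(d.toLocaleString("en-GB", { timeZone: "Asia/Shanghai" }));
    // "15/11/2023, 06:13:20"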
Can a timestamp be negative?

Yes — values before 1970 are negative. -86400 means exactly one day before the epoch (1969-12-31 00:00:00 UTC). Useful for historical date math.
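For instance, in the console:

    // One day before the epoch:
    console.log(new Date(-86400 * 1000).toISOString());
    // "1969-12-31T00:00:00.000Z"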
Why does the converted time look wrong?

Almost always a time-zone misunderstanding. Check your system clock and time zone setting. The ISO and UTC values are the source of truth — Local is just your browser's interpretation.
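A quick way to check, sketched for the browser console (getTimezoneOffset reports UTC minus local, in minutes, so negative values mean you are ahead of UTC):

    const d = new Date(1700000000 * 1000);
    console.log(d.toISOString());          // always UTC: the source of truth
    console.log(d.getTimezoneOffset());    // e.g. -480 for a UTC+8 browser
    console.log(d.toString());             // local rendering; should differ from
                                           // the ISO line by exactly that offset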