Tutorial · 10 min read

How to Convert SQL Dumps to Excel in Under 30 Seconds

Parse MySQL, PostgreSQL, or SQLite dumps and export them to .xlsx entirely in your browser — no server, no account, no upload. Full walkthrough.

Opening a 40 MB .sql file in a text editor to "just take a look" is a bad time. The dump scrolls for 200,000 lines, your editor freezes on syntax highlighting, and you still can't tell which table has the row you're looking for. What you actually want is a spreadsheet — one tab per table, one row per record, filters across the top. Here's how to get there in about 30 seconds using a browser tool that parses the dump locally, then writes a real .xlsx file.

Why SQL dumps are painful to read directly

A mysqldump or pg_dump file is optimised for one job: reloading the database. It is not optimised for humans. Every row is wrapped in an INSERT statement, strings are escaped with backslashes, binary blobs are hex-encoded, and 100,000 rows of the same table are packed into a single INSERT ... VALUES (...), (...), (...); statement with no line breaks. Grep works, but only if you know exactly what you're looking for.

The raw format problem

Consider a typical dump snippet:

INSERT INTO `users` (`id`,`email`,`status`,`created_at`) VALUES
(1,'alice@example.com','active','2024-11-02 09:14:31'),
(2,'bob@example.com','active','2024-11-02 09:15:02'),
(3,'carol@example.com','pending','2024-11-02 09:15:48');

That is easy. Now add a JSON column with embedded quotes, a TEXT field with newlines, a NULL where you expected a value, and a dialect that quotes identifiers with backticks instead of double quotes. The human brain starts reading the dump as a wall of commas.
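
For a taste of the messier cases, here is a hypothetical row that combines a JSON payload with embedded quotes, an escaped newline, and a NULL:

INSERT INTO `events` (`id`,`payload`,`note`,`deleted_at`) VALUES
(17,'{"plan":"pro","labels":["a\'s","b"]}','line one\nline two',NULL);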

Why spreadsheets win

Loading the same data into Excel or Google Sheets gives you three things for free: column headers you can freeze, per-column filters, and the ability to sort. For audits, rollback checks, data handovers, or just confirming that a migration shipped the rows it was supposed to ship, that is what you want. You don't need a full database engine — you need to see the rows.

The 30-second conversion workflow

The iKit SQL Converter takes the dump, parses it, and writes an Excel file locally. Three steps.

1. Paste or drop your .sql file

Open the tool and either drag your .sql file onto the editor or paste the contents directly. For pasting, Cmd/Ctrl+V works from any source. The right panel shows live syntax highlighting so you can spot a malformed statement before you waste time parsing the rest.

There is no upload. You can confirm this yourself by opening DevTools → Network, clicking Convert, and watching for XHR traffic. You won't see any — everything is JavaScript running locally in the page.

2. Pick the dialect and options

A small options bar sits above the input:

  • Dialect: switches the parser between MySQL, PostgreSQL, and SQLite. Set it to match the tool that produced the dump.
  • Include CREATE TABLE: adds a separate sheet listing each table's column types. Helpful when sharing the schema with non-DBAs.
  • Numeric type guessing: detects 123 vs "123" and writes cells as number or text accordingly. Turn it off if your IDs are numeric but should stay as strings.
  • Split by table: one sheet per table instead of everything on one. Keep it on; it's almost always what you want.
  • Max rows per sheet: caps each sheet (Excel's hard ceiling is 1,048,576 rows). Only relevant for dumps with huge fact tables.

The defaults are fine for 95 percent of real dumps.
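
If you're not sure which dialect produced a dump, the identifier quoting is usually the giveaway:

INSERT INTO `users` (`id`) VALUES (1);    -- backticks: mysqldump
INSERT INTO "users" ("id") VALUES (1);    -- double quotes: PostgreSQL / SQLite dumps
COPY public.users (id) FROM stdin;        -- COPY blocks: pg_dump's default for data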

3. Click Convert, then Download

A progress bar runs through three phases: tokenise, parse, write. On a 2023-era laptop, a 20 MB dump finishes in under 15 seconds and produces a fully formatted .xlsx with one tab per table. Download, open in Excel or Numbers, done.

If you're on a slow machine, the progress counter updates per-statement rather than per-row, so you'll see it tick even on a dump with one giant INSERT.

What gets parsed (and what gets skipped)

The parser is deliberately conservative: it handles the statements people actually care about in a data export and silently skips the ones that don't map to a spreadsheet cell.

Supported statements

  • CREATE TABLE — used to name sheets and pick column order
  • INSERT INTO ... VALUES (...) — the actual rows
  • INSERT INTO ... (col1, col2) VALUES (...) — with explicit column list
  • Multi-row INSERT — extended format from mysqldump --extended-insert
  • COPY FROM stdin (PostgreSQL's pg_dump default format)
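
For reference, a COPY block in a pg_dump file is tab-separated values terminated by a line containing only \.:

COPY public.users (id, email, status, created_at) FROM stdin;
1	alice@example.com	active	2024-11-02 09:14:31
2	bob@example.com	active	2024-11-02 09:15:02
\.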

Statements the parser ignores

  • DROP TABLE, CREATE INDEX, CREATE TRIGGER — no data to extract
  • SET NAMES, SET FOREIGN_KEY_CHECKS — environment setup
  • BEGIN/COMMIT/ROLLBACK — transaction markers
  • Comments (-- single-line and /* ... */ block)

This is the right trade-off. You do not want your spreadsheet export to fail because CREATE TRIGGER in the dump contains a semicolon inside a string literal.

Edge cases the parser handles correctly

  • Escaped quotes inside strings. 'It\'s fine' and 'It''s fine' both resolve to the same UTF-8 string in the cell.
  • Binary blobs. Any 0xDEADBEEF or X'DEADBEEF' literal is written as a hex string in a text cell — spreadsheets cannot store raw bytes.
  • NULL vs empty string. NULL becomes an empty cell with no value, while '' stays as an empty string.
  • Dates and timestamps. Standard ISO 8601 formats are written as Excel date values; non-standard ones stay as text so you don't lose precision.
  • Unicode. Japanese, Arabic, and emoji in column values round-trip without corruption because .xlsx is UTF-8 throughout.
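
A quick illustration of the NULL and quoting rules, using a made-up notes table:

INSERT INTO notes (id, body) VALUES
(1, NULL),           -- empty cell, no value
(2, ''),             -- cell holding an empty string
(3, 'It''s fine');   -- cell holding: It's fine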

Real scenarios where this saves time

Auditing a backup

Ops hands you backup-2025-03-12.sql and asks, "Did the migration include the new subscriptions table?" Loading a 60 MB file into a full database just to run SELECT COUNT(*) is overkill. Convert to Excel, check the bottom of the subscriptions sheet, report back. Two minutes.

Handover to non-technical teammates

Finance wants last quarter's order rows. You don't want to give them read access to production, and you don't want to babysit a CSV export that mangles the UTF-8 currency symbols. Run mysqldump --no-create-info --tables orders > orders.sql, hand them the .sql file and this tool's URL, and they can open the result in Excel themselves.
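
The --where flag on mysqldump can scope the dump to the quarter before you hand it over; the database and column names below are placeholders:

# Only Q1 2025 rows from the orders table (db and column names are placeholders)
mysqldump mydb --no-create-info --tables orders \
  --where="created_at >= '2025-01-01' AND created_at < '2025-04-01'" \
  > orders-q1.sql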

Schema diff during a refactor

You're renaming a column across a monorepo. A quick sanity check: dump the old staging database, dump the new one, convert both to Excel, diff the column headers of the relevant sheet. Faster than writing a migration test.
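
For the header diff, schema-only dumps keep both files small; mysqldump has --no-data and pg_dump has --schema-only. Database names below are placeholders:

# Schema-only dumps of the old and new staging databases
mysqldump --no-data staging_old > old-schema.sql
mysqldump --no-data staging_new > new-schema.sql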

Quick rollback check

After a risky deploy, the Slack message says "rolled back cleanly". Pull the latest backup, convert, open the affected table, sort by updated_at descending. If the last five rows have yesterday's timestamp, the rollback worked.

Alternatives compared

There are other ways to get SQL rows into a spreadsheet. They all have trade-offs.

  • iKit SQL Converter: fastest; client-side only; no setup; handles a 100 MB dump.
  • Restore to local MySQL → export CSV → open in Excel: slowest; private but involved; needs MySQL plus a client; handles a 100 MB dump.
  • mysql ... -e "SELECT * FROM users" > rows.csv: medium speed; requires access to a live database and the MySQL client; handles a 100 MB dump.
  • Online "SQL to Excel" converters with upload: fast; your data sits on their server; no setup; usually capped on file size.
  • Python script with pandas: as fast as your script; private; needs a Python environment; handles a 100 MB dump.

The CSV-via-client route is the traditional answer and works well if you have credentials to a live database. If you only have the dump file — a backup, a handover, a file someone emailed you — the browser tool avoids spinning up a database just to read rows.
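
For comparison, the credentials-based route looks roughly like this; note that the mysql client's batch output is tab-separated rather than true CSV, and the database name is a placeholder:

# Restore the dump locally, then pull rows out as a tab-separated file
mysql -u root -p -e "CREATE DATABASE restore_check"
mysql -u root -p restore_check < backup-2025-03-12.sql
mysql -u root -p restore_check -B -e "SELECT * FROM users" > users.tsv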

Under the hood: what the parser actually does

Most online "SQL to CSV" services fail on one of three things: backslash-escaped quotes, multi-row INSERT, and multibyte characters. A robust parser has to tokenise first, then build a statement tree, then walk the tree — not regex the raw text.

A simplified tokeniser state machine for string literals:

// Tokenise a SQL string literal; src is the dump text, i points at the opening quote.
// Handles backslash escapes (MySQL) and doubled quotes (ANSI SQL / PostgreSQL).
const ESCAPES = { n: '\n', t: '\t', r: '\r', '0': '\0', Z: '\x1a' };

function readString(src, i, quote) {
  const out = [];
  i++; // skip the opening quote
  while (i < src.length) {
    const c = src[i];
    if (c === '\\' && src[i + 1] != null) {
      // Backslash escape: \n, \t, \0 map to control characters;
      // anything else (\', \", \\) keeps the escaped character itself
      const next = src[i + 1];
      out.push(next in ESCAPES ? ESCAPES[next] : next);
      i += 2;
      continue;
    }
    if (c === quote) {
      // SQL-style doubled quote: 'it''s' encodes a literal quote inside the string
      if (src[i + 1] === quote) { out.push(quote); i += 2; continue; }
      return { value: out.join(''), next: i + 1 }; // next = index just past the closing quote
    }
    out.push(c);
    i++;
  }
  throw new Error('Unterminated string literal');
}

That one function handles the difference between 'don\'t' (MySQL) and 'don''t' (ANSI SQL and PostgreSQL). Skipping it is why so many converters produce spreadsheets with orphan backslashes.
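
Two quick checks against the sketch above, one per escape style; both should produce the same cell value:

// Same value whichever dialect produced the dump
readString("'don\\'t'", 0, "'");  // { value: "don't", next: 8 }
readString("'don''t'", 0, "'");   // { value: "don't", next: 8 }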

For the Excel side, the tool uses the OOXML SpreadsheetML format directly — .xlsx is a ZIP of XML parts — rather than a compiled binary library. The footprint stays small, the output opens cleanly in Excel, Numbers, Google Sheets, and LibreOffice Calc.
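
For a sense of the format, a minimal worksheet part with one header row and one data row looks roughly like this (a real file also needs the workbook, relationship, and content-type parts):

<!-- xl/worksheets/sheet1.xml, trimmed -->
<worksheet xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">
  <sheetData>
    <row r="1">
      <c r="A1" t="inlineStr"><is><t>id</t></is></c>
      <c r="B1" t="inlineStr"><is><t>email</t></is></c>
    </row>
    <row r="2">
      <c r="A2"><v>1</v></c>
      <c r="B2" t="inlineStr"><is><t>alice@example.com</t></is></c>
    </row>
  </sheetData>
</worksheet>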

Common pitfalls when converting SQL dumps

Even with a solid parser, a dump file can still produce surprising output. Five things to check.

1. Character set mismatches

If your dump starts with SET NAMES latin1 but the actual data was stored as UTF-8, rows with accented characters will look garbled. Fix: re-dump with mysqldump --default-character-set=utf8mb4.

2. Numbers that look like IDs

A column of user IDs like 0001234567 is technically a number, but if the leading zeros matter (national IDs, phone numbers, postal codes), disable numeric guessing. Otherwise Excel silently drops the leading zeros.

3. Very large text columns

Excel's single-cell limit is 32,767 characters. TEXT or LONGTEXT columns exceeding that will be truncated with an ellipsis. The tool adds a warning row at the top of the sheet when this happens so you don't miss it.

4. Foreign keys across sheets

Once rows live on separate tabs, foreign-key relationships are just numbers. Keep the original .sql file if you'll need to reconstruct joins — Excel can't do a JOIN across sheets without a PivotTable or Power Query.

5. Multi-gigabyte dumps

A 2 GB dump won't fit in browser memory. The practical ceiling is roughly 100 MB for Chrome and 70 MB for Safari. Above that, split the file first:

# Split a large dump into 50 MB chunks
split -b 50m dump.sql dump.part.

Each dump.part.aa, dump.part.ab, etc. can then be converted individually. Just remember that the first chunk contains the CREATE TABLE headers and the rest contain only rows.
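
One caveat: a byte-based split can cut an INSERT statement in half at a chunk boundary. If the dump has one statement per line (mysqldump --skip-extended-insert), splitting on line counts sidesteps that:

# Split on line boundaries instead; roughly 200k lines per chunk, adjust to taste
split -l 200000 dump.sql dump.part.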
