Overview
Data import is a critical step for teams evaluating or adopting SurrealDB - whether migrating from a legacy database, consolidating systems, or starting fresh with existing datasets.
SurrealDB supports flexible import patterns and offers multiple ways to ingest structured, document, graph, and time-series data.
This article provides a high-level overview of how teams approach data import and phased migration, and what tools are available to support the process.
Import methods
SurrealDB supports multiple ways to bring data into your database:
Manual import via SurrealQL
Teams can start quickly using simple SurrealQL INSERT, UPDATE, or RELATE queries with JSON payloads - great for loading sample data or onboarding early-stage projects.
Programmatic imports via client SDKs
For more control, developers use SurrealDB’s official client libraries (JavaScript, Go, Python, Rust) to build custom import scripts from files, APIs, or existing databases.
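A typical custom import script batches rows rather than sending them one at a time. The sketch below is SDK-agnostic: `execute` is a hypothetical callable you would wrap around whichever official client library (JavaScript, Go, Python, Rust) you use, and the batching logic itself is plain Python:

```python
import itertools
import json

def batched_import(rows, table, execute, batch_size=100):
    # `execute` is a hypothetical stand-in for an SDK query call;
    # adapt it to the client library you are actually using.
    it = iter(rows)
    sent = 0
    while batch := list(itertools.islice(it, batch_size)):
        # INSERT INTO accepts an array of objects, so each batch
        # becomes a single statement.
        execute(f"INSERT INTO {table} {json.dumps(batch)};")
        sent += len(batch)
    return sent
```

Batching keeps round trips low and makes it easy to add retry or progress reporting around each `execute` call.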
ETL pipelines
SurrealDB integrates into ETL and ELT pipelines via its official Fivetran connector, currently available in private preview - enabling automated data sync from supported sources into SurrealDB Cloud and self-hosted instances.
Official connector support is evolving.
Migration via APIs
When source databases expose REST or GraphQL APIs, teams can extract data via custom scripts or ETL tools, then transform it and ingest it into SurrealDB using its REST API and SurrealQL.
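As an illustration of the ingest side, the function below assembles an HTTP request for SurrealDB's POST /sql endpoint. The `NS`/`DB` header names follow SurrealDB's v1 HTTP API and are an assumption here - newer versions may use different header names, so check the docs for your release. Endpoint and credentials are placeholders:

```python
import base64

def build_sql_request(endpoint, ns, db, user, password, surql):
    # Header names NS / DB are assumed from SurrealDB's v1 HTTP API;
    # verify against the HTTP integration docs for your version.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {
        "url": f"{endpoint}/sql",
        "headers": {
            "Accept": "application/json",
            "NS": ns,
            "DB": db,
            "Authorization": f"Basic {token}",
        },
        "body": surql,  # raw SurrealQL is the request body
    }
```

The returned dict can be handed to any HTTP client (requests, httpx, urllib) to perform the actual call.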
Common migration scenarios
CSV, JSON, or flat files
Teams often start with CSV or JSON exports from legacy systems and load them directly via scripted inserts.
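A minimal sketch of that flat-file pattern, assuming a CSV export with a header row (all values arrive as strings; real imports usually add type casting):

```python
import csv
import io
import json

def csv_to_inserts(csv_text: str, table: str):
    # One INSERT statement per CSV row; DictReader uses the header
    # row for field names. Values stay strings - cast as needed.
    reader = csv.DictReader(io.StringIO(csv_text))
    return [f"INSERT INTO {table} {json.dumps(row)};" for row in reader]

sample = "id,name\n1,Alice\n2,Bob\n"
for stmt in csv_to_inserts(sample, "user"):
    print(stmt)
```

The same shape works for JSON exports by looping over the parsed array instead of a CSV reader.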
Relational → SurrealDB
Tables and foreign keys become documents and graph edges - often simplifying schema and reducing the need for complex joins.
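For example, a foreign-key column can become a record link instead of a join key. The `books`/`author` schema below is hypothetical, and the value quoting is deliberately naive (production code should escape or parameterise values):

```python
def row_to_create(row):
    # Hypothetical relational row: the author_id foreign key becomes
    # a SurrealQL record link (author:<id>) rather than a join column.
    # NOTE: naive quoting - escape values properly in real imports.
    return (
        f"CREATE book:{row['id']} SET "
        f"title = '{row['title']}', "
        f"author = author:{row['author_id']};"
    )

print(row_to_create({"id": 1, "title": "Dune", "author_id": 42}))
# CREATE book:1 SET title = 'Dune', author = author:42;
```

Queries can then traverse `book.author` directly instead of joining on an id column.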
NoSQL → SurrealDB
JSON documents can be imported with minimal transformation, while relationships can be enriched using SurrealDB’s graph model.
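One way to sketch that enrichment: a document exported from a NoSQL store often embeds a list of related ids, which can be promoted to first-class graph edges. The `person`/`knows`/`friend_ids` shape is an invented example:

```python
def enrich_with_relations(doc):
    # Hypothetical document shape: an embedded friend_ids array is
    # turned into RELATE statements, making each reference a real edge.
    src = f"person:{doc['id']}"
    return [f"RELATE {src}->knows->person:{fid};"
            for fid in doc.get("friend_ids", [])]

print(enrich_with_relations({"id": "alice", "friend_ids": ["bob", "carol"]}))
# ['RELATE person:alice->knows->person:bob;', 'RELATE person:alice->knows->person:carol;']
```

The original document is imported as-is; the generated RELATE statements run afterwards as an enrichment pass.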
Graph DB → SurrealDB
Node and edge structures can be recreated natively using a RELATE statement - SurrealQL makes modelling relationships and traversals straightforward.
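A sketch of that recreation, assuming an edge-list export with hypothetical keys (`src`, `dst`, `type`, `props`); edge properties are carried across with a CONTENT clause:

```python
import json

def edge_to_relate(edge):
    # Hypothetical edge-list record from a property-graph export.
    # CONTENT attaches the edge's properties to the new relation.
    stmt = f"RELATE {edge['src']}->{edge['type']}->{edge['dst']}"
    if edge.get("props"):
        stmt += f" CONTENT {json.dumps(edge['props'])}"
    return stmt + ";"

print(edge_to_relate({"src": "person:a", "dst": "person:b",
                      "type": "knows", "props": {"since": 2020}}))
# RELATE person:a->knows->person:b CONTENT {"since": 2020};
```

Nodes are imported first as ordinary records; the edge pass then runs one RELATE per exported edge.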
Migration best practices
Start small, grow incrementally: Many teams begin with a subset of data or a single microservice to validate the approach.
Use namespaces or separate databases: Useful when running SurrealDB side-by-side with existing systems before full cutover.
Re-evaluate schema: SurrealDB’s flexible modelling approach may allow for simplifications compared to your source database’s structure.
Consider live migration / dual writes: During phased cutovers, many teams adopt dual-write patterns, sending live app traffic to both the source DB and SurrealDB until migration is complete.
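The dual-write pattern above can be sketched as follows. `write_legacy` and `write_surreal` are hypothetical callables wrapping your two clients; the legacy database stays authoritative, and a SurrealDB failure is recorded for reconciliation rather than surfaced to the application:

```python
def dual_write(record, write_legacy, write_surreal, on_mismatch=None):
    # Phased-cutover pattern: the legacy DB is the source of truth,
    # SurrealDB receives a best-effort shadow write.
    result = write_legacy(record)       # must succeed; errors propagate
    try:
        write_surreal(record)           # shadow write to SurrealDB
    except Exception as exc:
        if on_mismatch:
            on_mismatch(record, exc)    # queue for later reconciliation
    return result
```

Once the shadow writes have run cleanly for long enough and reads have been verified, the roles are swapped and the legacy writer is retired.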
Explore more
Data ingestion with the Fivetran connector