Turso is an edge-hosted, distributed database based on libSQL, an open-source and open-contribution fork of SQLite. It is designed to minimize query latency for applications that serve queries from anywhere in the world. In particular, it pairs well with edge functions provided by cloud platforms such as Cloudflare, Netlify, and Vercel by placing your data geographically close to the code that accesses it.
To better understand how Turso works, read through the following concepts, which are used throughout this documentation.
Turso databases are deployed using Fly.io, which allows Turso to host database instances in 26 locations around the world, each identified by a three-letter code. When creating a database with replicas, consider which locations best support the code that runs your queries. In general, the physical distance between the code and the database determines latency, so it's recommended to benchmark your location options to find the best performance.
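To see which location codes are available, the CLI provides a listing command. This is a sketch; run `turso help` to confirm the exact subcommands in your CLI version:

```shell
# List the supported location codes and their regions.
turso db locations
```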
When you create a Turso database with the Turso CLI, it automatically chooses a location based on the physical location of the machine where you run the `turso db create` command. The default can be overridden on the command line.
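As a sketch, creating a database and overriding the default location might look like this (`my-db` and `lhr` are placeholder values; check `turso db create --help` for the current flag names):

```shell
# Create a database, letting the CLI pick the closest location.
turso db create my-db

# Create a database with its primary pinned to London (lhr).
turso db create my-db --location lhr
```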
A logical database is a collection of libSQL instances with one primary and zero or more replicas. Running the `turso db create` command creates a new logical database with a primary instance.
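To grow a logical database beyond its primary, replicas are added per location. A hedged sketch (`my-db` and `ams` are placeholders; verify the subcommand with `turso db --help` for your CLI version):

```shell
# Add a replica of my-db in Amsterdam (ams).
turso db replicate my-db ams
```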
A database instance is an installation of libSQL running on a single machine that is part of a logical database. All instances contain data related only to that database, and automatically participate in replication of that data between the instances. There are two types of instances: primary and replica.
The primary instance of a logical database is the main source of data for the database. Once allocated to a location, it cannot be moved. All changes to the database are handled by the primary. Client applications may connect directly to the primary for read and write operations.
A replica of a logical database contains a copy of the data from the primary and is kept in sync as changes are made over time. Client applications may connect directly to a replica for read and write operations, but any writes are automatically forwarded to the primary. Reads are therefore served with minimal latency, while writes incur an extra network hop to the primary, which then pushes the change to all replicas. The replicas provide snapshot isolation for read transactions.
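From the application's perspective, this read/write split is transparent: the client connects to a single database URL and does not need to know whether a replica or the primary serves each statement. The following sketch uses the `@libsql/client` TypeScript library; the database URL and auth token are placeholders you would obtain from the Turso CLI:

```typescript
import { createClient } from "@libsql/client";

// Placeholder URL and token; substitute the values reported by
// the Turso CLI for your own database.
const client = createClient({
  url: "libsql://my-db-myorg.turso.io",
  authToken: process.env.TURSO_AUTH_TOKEN,
});

// This read is served by the nearest instance. A write statement
// issued the same way would be forwarded to the primary, which
// then pushes the change back out to every replica.
const rs = await client.execute({
  sql: "SELECT * FROM users WHERE id = ?",
  args: [1],
});
console.log(rs.rows);
```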