Ellie Sager
Posted on July 28, 2024
For the impatient - here is the link to my code on GitHub.
For the rest - keep reading.
Library Used: sqlx-postgres
The sqlx library is a popular way to connect to a relational db and, despite what multiple examples might make you believe, the current version (0.8.0 as of this writing) handles Uuids just fine.
Uuid vs i64
I was looking for a good example of connecting Rust to a Postgres DB. There was one issue that was not adequately addressed there: inserting data into the database where the id in Rust is a Uuid (uuid in the db), not an i64 (bigserial in the db).
For example, this is my simple struct:
use sqlx::FromRow;
use sqlx::types::Uuid;

#[derive(Debug, FromRow)]
pub struct Message {
    pub id: Uuid, // <-- I am too cool for i64, I want my id as Uuid!
    pub content: String,
}
I tried to follow the same steps as structs that use an i64 for the id, but the Rust compiler was not happy, of course.
After reading some discussions online and a little experimenting, I was able to get what I wanted, and it turns out it is not that hard!
Sharing it for the benefit of others ;)
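Before we start: the snippets below assume sqlx is built with its postgres and uuid features enabled, and that you already have a connection pool. For context, here is a minimal, hypothetical sketch of obtaining one (the connection string is a placeholder, swap in your own):

use sqlx::postgres::PgPoolOptions;
use sqlx::{Pool, Postgres};

// a minimal sketch of getting a Pool<Postgres>; the URL is a placeholder
async fn connect() -> Pool<Postgres> {
    PgPoolOptions::new()
        .max_connections(5)
        .connect("postgres://user:password@localhost/messages_db")
        .await
        .expect("postgres connection error")
}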
Create A Table
There are better ways to create a table than using Rust code, but since many examples show how to do it (and usually have the id as an i64), I decided to do it too, but with the id being a Uuid :P
Here is how we can create a new table if our id is an i64:
async fn init_messages_table(pool: &Pool<Postgres>) -> () {
    sqlx::query(
        r#"
        CREATE TABLE IF NOT EXISTS messages (
            id bigserial,
            content text
        );"#,
    )
    .execute(pool)
    .await
    .expect("postgres messages_table creation error");
}
And here is how we can do the same if our id is a Uuid:
async fn init_messages_table(pool: &Pool<Postgres>) -> () {
    sqlx::query(
        r#"
        CREATE TABLE IF NOT EXISTS messages (
            id uuid,
            content text
        );"#,
    )
    .execute(pool)
    .await
    .expect("postgres messages_table creation error");
}
All you have to do is declare your id this way: id uuid as opposed to id bigserial. Neat!
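For completeness, here is one hypothetical way to wire the table creation up, assuming a Tokio runtime and the connect() sketch from earlier (your setup may differ):

// hypothetical wiring: create the pool, then create the table
#[tokio::main]
async fn main() {
    let pool = connect().await;
    init_messages_table(&pool).await;
}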
Create/Insert Data
This is the part you most likely want to see.
Here is how we can insert a new row if our id is an i64:
let row: (i64,) =
    sqlx::query_as("insert into messages (content) values ($1) returning id")
        .bind(req.content.to_owned())
        .fetch_one(&data.db_pool)
        .await
        .expect("postgres insertion error");
For the case where the id is a Uuid, we do this instead:
// generate the Uuid on the Rust side and convert it to sqlx's Uuid type
let id = sqlx::types::Uuid::from_u128(uuid::Uuid::new_v4().as_u128());
let row: (sqlx::types::Uuid,) =
    sqlx::query_as("insert into messages (id, content) values ($1, $2) returning id")
        .bind(id.to_owned())
        .bind(req.content.to_owned())
        .fetch_one(&data.db_pool)
        .await
        .expect("postgres insertion error");
Pay attention to these two main differences:
1. The return type of the row is (sqlx::types::Uuid,) instead of (i64,) - but this is what you'd expect.
2. The id of the data comes from Rust - this is where we create the Uuid - as opposed to the scenario where the database computes the id of the data by incrementing the id of the previous row. Since our database doesn't know the value of the id, we have to pass it from Rust. And so, we provide and bind an extra piece of information: the Uuid we generated. (A database-generated alternative is sketched right below.)
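As a side note - and this is a variation, not what the code above does - if you would rather let the database generate the id, Postgres 13+ ships a built-in gen_random_uuid() you can use as a column default. Then the insert looks just like the i64 case again. A hedged sketch:

// variation: let Postgres generate the uuid (requires Postgres 13+ for gen_random_uuid())
sqlx::query(
    r#"
    CREATE TABLE IF NOT EXISTS messages (
        id uuid DEFAULT gen_random_uuid(),
        content text
    );"#,
)
.execute(pool)
.await
.expect("postgres messages_table creation error");

// insert without providing an id; the database fills it in and returns it
let row: (sqlx::types::Uuid,) =
    sqlx::query_as("insert into messages (content) values ($1) returning id")
        .bind(req.content.to_owned())
        .fetch_one(&data.db_pool)
        .await
        .expect("postgres insertion error");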
Note
I saw some older code online that added AS "id: Uuid" to the query:
sqlx::query_as("insert into messages (id, content) values ($1, $2) returning id AS \"id: Uuid\"")
but it turns out, it is not necessary with the current sqlx version.
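By the way, if you want the whole Message back instead of a one-element tuple, something along these lines should also work - this is just a sketch of mine reusing the id, req, and data.db_pool from the snippet above, not code from the example:

// return both columns so sqlx can build the Message via FromRow
let message: Message =
    sqlx::query_as::<_, Message>("insert into messages (id, content) values ($1, $2) returning id, content")
        .bind(id)
        .bind(req.content.to_owned())
        .fetch_one(&data.db_pool)
        .await
        .expect("postgres insertion error");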
Read Data
Surprise! When it comes to reading data from database, I didn't have to make any adjustments:
let select_query = sqlx::query_as::<_, Message>("SELECT id, content FROM messages");
let messages: Vec<Message> = select_query
    .fetch_all(&data.db_pool)
    .await
    .expect("postgres selection error");
The magic of the sqlx library seems to sort out the type of our id all by itself!
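To sanity-check the result, you can just loop over the messages (a tiny sketch):

// Uuid implements Display, so printing it directly works
for message in &messages {
    println!("{} -> {}", message.id, message.content);
}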
A Moment Of Gratitude
Thank you, whoever stumbled upon this post and looked at it 🤎
I am just starting out, so consider leaving me some feedback.
Are you a fellow Rustacean?
Was this post helpful to you?
What other Rust-related issues would you like to be addressed as a Rusty Recipe?
Reminder - the code for this post lives in my GitHub.