PersistX/PersistDB

Why does Insert use ValueSet instead of directly using the Model object?


I wonder what the reasoning is behind using ValueSet for Insert. It opens the door to forgetting a required field, which causes a runtime error that could have been a compile-time error if the model object were used directly.

I ask because the README says

Type-safety: Compile-time errors prevent runtime errors

So instead of doing

struct Task {
    public let id: UUID
    public let createdAt: Date
    public var text: String
    public var url: URL?
    
    public static func newTask(text: String, url: URL? = nil) -> Insert<Task> {
        return Insert([
            \Task.id == .uuid(),
            \Task.createdAt == .now,
            \Task.text == text,
            \Task.url == url,
        ])
    }
}

store.insert(Task.newTask(text: "Ship!!!"))

do

struct Task {
    public let id: UUID
    public let createdAt: Date
    public var text: String
    public var url: URL?
    
    public static func newTask(text: String, url: URL? = nil) -> Insert<Task> {
        return Insert(
            Task(
                id: .uuid(),
                createdAt: .now,
                text: text,
                url: url
            )
        )
    }
}

store.insert(Task.newTask(text: "Ship!!!"))
mdiep commented

PersistDB doesn't require you to instantiate any model objects because some may be impossible to instantiate. For Task this would definitely be possible. But consider this pair of models:

final class Model1 {
  var id: Int
  var model2: Model2
}

final class Model2 {
  var id: Int
  var model1: Model1 // back-pointer from `Model1.model2`
}

(Forgive the generic name. I don't have time to find a compelling example of a 1:1 relationship.)

In this case, you can never instantiate Model1 or Model2 because they depend on each other.

So 1:1 relationships rule out instantiation.
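
To make the circularity concrete, here is a sketch with memberwise-style initializers written out (they're assumed for illustration; the snippet above leaves them off):

final class Model1 {
    var id: Int
    var model2: Model2

    init(id: Int, model2: Model2) {
        self.id = id
        self.model2 = model2
    }
}

final class Model2 {
    var id: Int
    var model1: Model1

    init(id: Int, model1: Model1) {
        self.id = id
        self.model1 = model1
    }
}

// There is no order in which the pair can be built:
//   Model1(id: 1, model2: ???) needs an existing Model2,
//   Model2(id: 2, model1: ???) needs an existing Model1,
// so an Insert that required a fully-formed model instance could never be constructed.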

But Insert could pull the same trick as Schema and Projection by piggybacking off of Model1.init. The downside is that you lose the names of all the parameters.

struct Task {
    public let id: UUID
    public let createdAt: Date
    public var text: String
    public var url: URL?
    
    public static func newTask(text: String, url: URL? = nil) -> Insert<Task> {
        return Insert(
            Task.init,
            .uuid(),
            .now,
            text,
            url
        )
    }
}

That's why I chose to use ValueSet.

And while it unfortunately doesn't provide you with compile-time errors when you've done something wrong, it should always give you a runtime error. I'm assuming that most people won't have branching to build up their Insert—they're going to pass in an array literal of assignments. So you should get your runtime error as soon as you try to use that code; any test would also prove that you've specified all the required fields. I decided this guarantee was enough.
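
To illustrate the distinction, here is a sketch (the newTaskRisky name is hypothetical, and the operator syntax just mirrors the examples above) of the kind of branching construction that would delay the error:

extension Task {
    // Hypothetical: `createdAt` is only assigned on one branch, so the runtime
    // error only shows up for inputs that take the other branch. The plain
    // array-literal version above fails on its very first use instead.
    static func newTaskRisky(text: String, url: URL? = nil) -> Insert<Task> {
        if url != nil {
            return Insert([
                \Task.id == .uuid(),
                \Task.createdAt == .now,
                \Task.text == text,
                \Task.url == url,
            ])
        } else {
            // Oops: `createdAt` was forgotten here, and only this path fails.
            return Insert([
                \Task.id == .uuid(),
                \Task.text == text,
            ])
        }
    }
}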

Thanks for the answer!

And while it unfortunately doesn't provide you with compile-time errors when you've done something wrong, it should always give you a runtime error. I'm assuming that most people won't have branching to build up their Insert—they're going to pass in an array literal of assignments. So you should get your runtime error as soon as you try to use that code; any test would also prove that you've specified all the required fields. I decided this guarantee was enough.

I agree that the trade-off seems more than reasonable. Any "real-world" code that forgets to provide a required property will fail as soon as the insertion is executed, so any basic test should find the issue. Still, it would be really cool to be able to check this at compile time.

But Insert could pull the same trick as Schema and Projection by piggybacking off of Model1.init. The downside is that you lose the names of all the parameters.

Just to be sure that I understood your answer correctly: do you propose that as a way of solving the 1:1 relationship issue? If so, how does it solve it?

On the compile-time-safety side, imagine annotating the models for a code generator like Sourcery:

final class Person {

  var firstName: String
  var lastName: String
  var birthDate: Date
  var car: Car

}

final class Car {

  var brand: String
  var model: String
  // sourcery: reference_through: car
  var owner: Person

}

Then you could have a template that iterates over all types that conform to PersistDB.Model and generates something like

extension Person {

  struct InsertionSet {
  
      let firstName: String
      let lastName: String
      let birthDate: Date
      let car: Car.InsertionSet

  }

}

extension Car {

  struct InsertionSet {

      let brand: String
      let model: String

  }

}

Something like this would make the API type-safe, at the cost of introducing Sourcery as a development dependency and a small increase in source code size and binary size. I don't know how auto-incremented fields generated by the DB are handled in PersistDB, but something like this could also help with fields that are present for all persisted models yet should not be provided at insertion time.
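
To sketch how the generated type could plug back into the existing API (entirely hypothetical; the makeInsert name and the mapping are assumptions, with the assignment operator mirroring the earlier examples):

extension Person.InsertionSet {
    // The memberwise initializer of InsertionSet is what provides the
    // compile-time guarantee; a generated method like this would then lower
    // it onto the existing ValueSet-based Insert.
    func makeInsert() -> Insert<Person> {
        return Insert([
            \Person.firstName == firstName,
            \Person.lastName == lastName,
            \Person.birthDate == birthDate,
            // `car` is left out here; how a 1:1 relationship gets assigned is
            // exactly the open question in this thread.
        ])
    }
}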

mdiep commented

Just to be sure that I understood your answer correctly: do you propose that as a way of solving the 1:1 relationship issue? If so, how does it solve it?

It could work the same way that 1-to-many relationships can currently be defined.

struct Book {
    let id: Int
    let title: String
    let author: Author
}

extension Book: PersistDB.Model {
    static let schema = Schema(
        Book.init,
        \.id ~ "id",
        \.title ~ "title",
        \.author ~ "author"
    )
}

struct Author {
    let id: Int
    let name: String
    let books: Set<Book>
}

extension Author: PersistDB.Model {
    static let schema = Schema(
        Author.init,
        \.id ~ "id",
        \.name ~ "name",
        \.books ~ \Book.author // <---- This defines the back pointer for 1:many
    )
}

For the table that contains the actual column, you'd specify a name; for the back pointer, you'd specify the other key path.
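
Applied to the Person/Car example from earlier, a 1:1 version might then look something like this (purely a sketch of something that doesn't exist yet, and it assumes both classes gain memberwise-style initializers):

extension Person: PersistDB.Model {
    static let schema = Schema(
        Person.init,
        \.firstName ~ "firstName",
        \.lastName ~ "lastName",
        \.birthDate ~ "birthDate",
        \.car ~ "car"             // this side owns the actual column
    )
}

extension Car: PersistDB.Model {
    static let schema = Schema(
        Car.init,
        \.brand ~ "brand",
        \.model ~ "model",
        \.owner ~ \Person.car     // back pointer for the 1:1, mirroring the 1:many case
    )
}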

Something like this would make the API type-safe, at the cost of introducing Sourcery as a development dependency and a small increase in source code size and binary size.

I really want to avoid something like Sourcery. It's a neat project, but I'm not a fan of generating source code.

I don't know how auto-incremented fields generated by the DB are handled in PersistDB, but something like this could also help with fields that are present for all persisted models yet should not be provided at insertion time.

They're not currently, but the plan is to not specify them in the Insert. Then the DB can create them and PersistDB can pass them back from the insert.
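
As a sketch of that intent, pretending (hypothetically) that Task.id were such a DB-generated column instead of a client-side UUID:

// The Insert simply omits `id`; the database assigns it, and the generated
// value would be handed back when the insert completes.
let newTask = Insert<Task>([
    \Task.createdAt == .now,
    \Task.text == "Ship!!!",
])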

mdiep commented

Gonna close this, but feel free to reopen or create a new issue if you have more questions.