clarkie/dynogels

Does not enforce required() on nested property

digvijayy opened this issue · 6 comments

MainSchema = Joi.object().keys({
  uid: Joi.string().guid({ version: ['uuidv4'] }),
  childarray: Joi.array().items(Joi.object().keys({
    item_uid: Joi.string().guid({ version: ['uuidv4'] }),
    units: Joi.object().keys({
      unitcount: Joi.number(),
      unittype: Joi.string().required() // <-- required() is not enforced here
    })
  }))
})

Dynogels is not enforcing required() on the nested property unittype. Is this expected behavior, or am I missing something?

Can you give an example of a data object that passes validation when it should not?

Sorry it took some time. Here is the example, with code and output. Thank you very much for all this effort.

var dynogels = require('dynogels');
var uuid = require('uuid');
const Joi = require('joi');

let schema = Joi.object().keys({
    uid: Joi.string().guid({ version: ['uuidv4'] }),
    events: Joi.array().items(
        Joi.object().keys({
            eventName: Joi.string(),
            eventTypeId: Joi.string().required(),
        })
    )
});

let DebuggingDBSchema = {
    hashKey: 'uid',
    // add the timestamp attributes (updatedAt, createdAt)
    timestamps: true,
    tableName: 'debugger',
    schema: schema
};

var debugger1 = dynogels.define('debugger', DebuggingDBSchema);
var obj = {
    uid: uuid.v4(),
    events: [
        {eventName: 'event1', eventTypeId: 'myEvent'},
        {eventName: 'event2'},
    ]
}
debugger1.update(obj, function (err, data) {
    console.log(err || data); // -- SUCCEEDS

    // Database output:
    // {
    //     "events": [
    //         {
    //             "eventName": "event1",
    //             "eventTypeId": "myEvent"
    //         },
    //         {
    //             "eventName": "event2"
    //         }
    //     ],
    //     "uid": "fd8bf16c-46cf-4ed2-abc8-3715b273dce3",
    //     "updatedAt": "2018-10-18T18:42:58.915Z"
    // }
});

schema.validate(obj, function (err, data) {
    console.log(err || data); // -- FAILS: Joi reports the missing eventTypeId
});

The second event should have failed validation, as it does when I validate explicitly with Joi, but dynogels happily writes it to the database.

Validation on updates is currently only partially working, and will be removed altogether in the next major release (see #127 and #129).

Updates are not guaranteed to have a full view of the document being stored; rather, updates describe a set of changes for the database server to perform. Without a complete view of the document after updates are applied, dynogels cannot reasonably be expected to validate it. Therefore, this feature is simply not feasible.

For example, consider this schema:

Joi.object().keys({
  id: Joi.string().required(),
  a: Joi.string(),
  b: Joi.string()
}).xor('a', 'b');

This schema requires the id field and exactly one of a and b. Validation fails if neither is present, or if both are.

Now, let's assume the database is storing the document { "id": "foo", "a": "bar" }

Then, let's issue the following update: { "id": "foo", "b": "bar" }

This will result in the database updating the stored document to: { "id": "foo", "a": "bar", "b": "bar" }

Oops, this is an invalid state -- a and b are both set! Why did dynogels allow this to happen? It's simple: dynogels didn't know that a was set.
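Running the resulting document back through the same Joi schema confirms the point (a quick sketch; the exact error message depends on the Joi version):

const Joi = require('joi');

const schema = Joi.object().keys({
  id: Joi.string().required(),
  a: Joi.string(),
  b: Joi.string()
}).xor('a', 'b');

// The document as the database now stores it, after the update was applied.
const stored = { id: 'foo', a: 'bar', b: 'bar' };

const result = schema.validate(stored);
console.log(result.error ? result.error.message : 'valid');
// prints an xor conflict error, because "a" and "b" are exclusive peers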

The only way this feature can reasonably work is if dynogels always knows the exact final state of the object, which means never using update and only using item.save and model.create methods.

You can still use update, but this shifts the burden of validation to you; since dynogels can't do it, your code is responsible for ensuring that the update operation being performed results in a valid object.
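For instance, switching the earlier example from update to create lets dynogels see the whole document (a minimal sketch reusing the schema, debugger1, and obj names from the comment above; the exact error text will vary):

// With the complete document, dynogels can run the Joi schema against it,
// so the missing eventTypeId on the second event is now caught at write time.
debugger1.create(obj, function (err, data) {
  console.log(err || data); // err should be a validation error here
});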

Makes sense. Thank you. I think for my case I can start to use .save instead of .update.

That's what we do in our production code, too. There isn't a single update call unless we can prove that the document will be valid.
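For completeness, one way to "prove" validity before an update is a read-merge-validate round trip. The sketch below reuses the names from the earlier example; the changes object, the flat Object.assign merge, and item.toJSON() returning the stored attributes are assumptions for illustration, and read-then-write is not atomic, so concurrent writers can still break the invariant.

// Hypothetical partial change set for illustration.
var changes = { events: [{ eventName: 'event1', eventTypeId: 'myEvent' }] };

debugger1.get(obj.uid, function (err, item) {
  if (err || !item) return console.error(err || 'document not found');

  // Merge the stored attributes with the incoming changes (flat merge only).
  var merged = Object.assign({}, item.toJSON(), changes);

  // Issue the update only if the merged document passes the Joi schema.
  schema.validate(merged, function (validationErr) {
    if (validationErr) return console.error(validationErr.message);
    debugger1.update(merged, function (updateErr, data) {
      console.log(updateErr || data);
    });
  });
});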

As an aside, I firmly believe that software should do something right or not do it at all. The partial validation in place for updates does catch some validation errors, but it cannot catch all of them. Because it currently catches some, it gives users of the library the false impression that it's a full validation. Then when it doesn't validate some particular edge case, we get a bug report and we have to think about whether we can actually fix that case.

In a few years we'd have an issue tracker littered with "update validation doesn't work when ___" issues, documenting all of the ways in which update validation can't work.

So I think it's a much better idea to simply not validate updates at all, rather than give the false impression that we can.

I agree with you.