Fight the model jungle
The control flow of saving, fetching and destroying models is currently a total mess. The model is also getting bigger and bigger. We need to split up its tasks and rework it.
First sketch
Explanations:
- read/readCollection/create/update/destroy are referred to as io-events/io-methods
- later there may be createCollection/updateCollection/destroyCollection for better io performance
Model
The model represents a collection of attributes that can be watched and changed. Thus the model can also be used to store specific states in views (without being something that must be serialized).
- provides setters and getters for attributes
- is able to revert uncommitted changes
- doesn't know about model urls and ids
- unaware of the environment (works exactly the same on the client and on the server)
- fires change-events
- it should also be possible to listen for changes of a specific attribute, like on("change:myName", ...) (see the sketch below)
Resource
The resource is divided into a static part and an instance part.
Resource (static)
- represents a specific type of data that can be serialized
- thus we need a url that identifies the type over the network
- the static Resource is able to validate a serialization of a resource instance
- provides methods like find() and findById() (see the sketch below)
- knows how to translate a resource instance back and forth into a JSON representation
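A sketch of how the static part could look (Resource.extend and the callback signatures are assumptions, mirroring the Service example below):

// sketch of a static Resource (all names are assumptions)
var CommentResource = Resource.extend("CommentResource", {
    url: "blog/comment" // identifies the type over the network
});

// querying and validating work on serializations, not on instances
CommentResource.find(ids, params, function (err, comments) { ... });
CommentResource.findById(ids, function (err, comment) { ... });
CommentResource.validate(serialization, function (result) { ... });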
Resource (instance)
extends Model
- represents a bunch of actual data that can be saved, fetched and destroyed
- thus the resource should be able to provide a resource url that identifies an instance of the resource uniquely
- to create that url the instance needs a collection of all related ids
- an instance is basically a model with additional methods like save(), fetch(), destroy() and validate() (see the sketch below)
- the JSON representation should contain all data and information needed to identify and process the resource
- on save/fetch/destroy it just emits the related io-event on the api
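A possible usage sketch of such an instance (setIds is an assumed name for the id handling described above):

// usage sketch of a resource instance
var comment = new CommentResource();

comment.setIds({ blog: 1 }); // all related ids needed to build the resource url
comment.set("message", "Hello jungle");
comment.save(function (err) { ... }); // emits "create" or "update" on the Api
comment.fetch(function (err) { ... }); // emits "read" on the Api
comment.destroy(function (err) { ... }); // emits "destroy" on the Api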
Api (singleton)
extends EventEmitter
- an io-event can be understood as a request to read or write a resource's serialization
- informs its listeners when someone has requested to read/readCollection/create/update/destroy/validate an instance of a specific resource. It therefore passes the serialization of the instance, not the instance itself, to its listeners
- an io-request may contain any arbitrary data (e.g. for authorization)
- thus an http/websocket request just emits a specific event, e.g. that a resource's serialization should be stored persistently (see the sketch below)
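For illustration, a listener could hook into the Api like this (the event naming scheme "url io-event" is just one possible convention, and Request/Response are taken from the example at the end):

// sketch: io-events on the Api (event names are assumptions)
Api.on("blog/comment create", function (req, res) {
    // req carries ids, the serialization and arbitrary data (e.g. for authorization),
    // but never the resource instance itself
});

// a resource instance would emit something like
Api.emit("blog/comment create", new Request(ids, params, serialization), new Response());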
Service (singleton)
The service is responsible for reading and writing resources from and to a persistent data storage.
- provides io-methods for serializations of resources
- therefore it needs the ids and attributes (and maybe params)
- typically listens on the Api for io-events of a specific resource and reacts to them
Example
// services/post/CommentService.server.class.js
var CommentService = Service.extend("CommentService", {
    read: function (req, res) { ... },
    readCollection: function (req, res) { ... },
    create: function (req, res) { ... },
    update: function (req, res) { ... },
    destroy: function (req, res) { ... }
});
// registers listeners for io-events of that url on the api
CommentService.watch("blog/comment");
// remove listeners
CommentService.unwatch();
// client.init.js
// watches the Api for specific requests and starts an http/websocket request
RemoteService.watch("blog/comment");
Questions
- is the resourceCache (formerly modelCache) part of the static Resource, or is it a special service? If it's a service, the resourceCache needs the instance, not its serialization
- who is responsible for creating instances of resources? There is also the tricky part with sub resources
- how does validation work?
- are there server/client schemas and how do they work?
- what happens on changes of the ids?
Seems like a lot of stuff to me :)
It also seems useful to separate between raw data and a model instance. Sometimes it's more desirable to work on the raw data (especially if you don't need an EventEmitter or convenience methods like save(), fetch() or destroy()).
One possible solution could be that the Resource always emits and accepts raw data, in contrast to the Model. But the Resource provides a special link method to register a specific model class for a model url; link extends that model class with convenience methods like find, save, fetch, etc.
BlogResource.link(Blog);
// calls internally something like
Blog.find = function (ids, params, callback) {
    var req = new Request(ids, params),
        res = new Response();

    BlogResource.emit("readCollection", req, res);

    return res;
};
Blog.prototype.save = function () {
    // an instance that already has an id exists and thus needs an update
    var method = this.getId() ? "update" : "create",
        req = new Request(this.getIds(), null, this.toObject()),
        res = new Response();

    BlogResource.emit(method, req, res);

    return res;
};
// and so on
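Assuming the returned Response is itself an event emitter (its interface is still an open question), client code could then look like this:

// usage sketch after linking (assumes Response emits "success"/"error")
var blog = new Blog();
blog.set("title", "Hello jungle");
blog.save().on("success", function () { ... });

Blog.find(ids, params).on("success", function (blogs) { ... });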