The idea is to implement a simple way to describe what data should be exported and how.
Example:
```ruby
Export.transform User do
  replace :full_name, -> { FFaker::Name.name }
  replace :password, 'password'
  replace :email, -> (r) { "#{r.email.split('@').first}@example.com" }
  ignore :created_at, :updated_at
end
```

And then it is possible to apply your rules to the tables:
```ruby
dump = Export::TransformData.new(User)
result = dump.process(User.all)
```

And export the transformed values to a file:

```ruby
File.open('results.json', 'w+') { |f| f.puts result.to_json }
```

Currently you can also specify a dump schema to fetch a specific scenario of data:
```ruby
Export.dump 'last 3 months users' do
  model User, -> { where(["created_at > ?", 3.months.ago]) }
  ignore AuditableItem
  on_fetch_data { |table, data| puts "Exported #{data.size} from #{table}" }
  on_fetch_error do |table, error, full_trace|
    puts "Oops! Something went wrong exporting #{table}", error, full_trace
  end
end
```

Imagine that you also have an `orders` table that depends on `users`. It will
automatically load only the orders related to the exported users. The same will
happen with `order_items`, which depend on which orders are being exported.
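For illustration, here is a minimal sketch of that behavior. It assumes `Order belongs_to :user` and `OrderItem belongs_to :order` (both models are made up for this example); a single scope on `User` is enough, and the dependent tables are filtered automatically:

```ruby
# Sketch only: Order and OrderItem are hypothetical models with the usual
# belongs_to associations. No rules are declared for them, yet the dump
# includes only the orders (and order_items) belonging to the exported users.
Export.dump 'last 3 months users' do
  model User, -> { where(["created_at > ?", 3.months.ago]) }
end
```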
Two options are available to control which records are fetched:

- `all` — include all records from the given tables.
- `model` — receives a class and allows you to specify a scope directly for that model class.
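A hedged sketch of how the two options could be combined, assuming `all` accepts one or more model classes (`Role` and `Country` are illustrative names, not part of the example app):

```ruby
# Illustrative only: `all` is assumed to take a list of classes whose records
# are exported in full, while `model` narrows a class down to a scope.
Export.dump 'reference data plus recent users' do
  all Role, Country
  model User, -> { where(["created_at > ?", 3.months.ago]) }
end
```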
We have an example in `examples/rails_test` that you can use:

```sh
cd examples/rails_test
rake db:setup    # Populate with some seeds
rake export:init # Generate the default configuration
```

It will generate an initial suggestion for setting up the configuration:
```ruby
Export.table 'users' do
  replace :full_name, -> { FFaker::Name.name }
  replace :password, 'password'
  replace :email, -> (r) { "#{r.email.split('@').first}@example.com" }
end

Export.table 'addresses'
```
To test the whole process:

```sh
rake export:dump
```

Check the normalized results in the `results.json` file.
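If you want a quick look at what was written, a small Ruby sketch like the one below works; note that the top-level structure (tables as keys) is an assumption about the output, not a documented format:

```ruby
# Sketch only: assumes results.json is keyed by table name, which may differ
# from the actual normalized layout.
require 'json'

data = JSON.parse(File.read('results.json'))
data.each { |table, rows| puts "#{table}: #{rows.size} rows" }
```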
Next steps:

- Make it load the generated dump file
- Explore SQL, YAML and other data formats
- Port `lib/tasks/export.rake` from the rails example to the lib
- Allow use of the `fake :full_name` syntax:
```ruby
Export.table 'users' do
  fake :full_name
  fake :email
end
```