MyStarInYourSky/openshift-cachet

Upgrading Cachet deployed via Openshift button using SSH

brianjking opened this issue · 13 comments

Is it possible to upgrade Cachet based on https://docs.cachethq.io/docs/updating-cachet once deployed via your OpenShift button?

I know you're working on getting a version 2.0 setup, just thought I'd ask for now.

Thanks!

There is already an upgrade script for v1 working - just gotta hash it out to make it support v2.

There is a backup system for that too - I was just working on that earlier. Should be finished sometime tomorrow so that people can upgrade.

That being said, you will have to wait for the backup system to be online if you're using HHVM. Due to some issues (see the notes on the 1.x-HHVM branch), HHVM is no longer supported; instructions on how to move to the new setup are in the 1.x-HHVM branch README.

Tomorrow, I will work to get restores from backups working properly, and instructions will be added on how to do backup/restore.

@ALinuxNinja Looks like my primary build is using the HHVM option. However, when I SSH into the application, run env, and cd to the directory described in your instructions, I don't see any version numbers, only a tar file for 2.0.0:

[cachet-dockdogs.rhcloud.com data]> ls
bin hhvm.pid hhvm.sock mysql_db_dbname mysql_db_host mysql_db_username mysql_dump_snapshot.gz v2.0.0-RC1.tar.gz

Sounds like I may be best off to launch a version from the 1.x branch or staging branch and then upgrade to v2 from there. If so, which branch would you suggest?

Thank you for all that you do!

Go with v1 - staging is for internal testing.

By the way, you can just enter "1.2.0" as the version. The version is there just so Cachet will know when to upgrade, and won't needlessly upgrade installations that are running the latest version.

I'm just finishing up the testing of the import script, so it should appear in a few hours.

@ALinuxNinja It looks like there's no way inside of the Cachet dashboard to view this information. I confirmed with the Cachet dev team and their response is here: cachethq/cachet#1181

Ah, that sucks. I guess if you can't find the current version, you can just put "1.2.0" in and pray that the Cachet developers have tested their database migrations on multi-version hops 😀

That being said, I apologize for that - I was going to add the upgrade script I'm using now to the HHVM version, but I found that I could no longer deploy the HHVM setup due to the HHVM cartridge bug.

Well, I'm happy to be your tester if you'd like to give me instructions for HHVM.

Just finished the backup/restore features, and they seem to be working fine.

Give this a try:

# Record the version so the deploy hook knows whether an upgrade is needed
echo "1.2.0" > ~/.env/user_vars/CACHET_VERSION
# Dump the database into the persistent data directory
mysqldump -h"$DB_HOST" -u"$DB_USERNAME" -p"$DB_PASSWORD" "$DB_DATABASE" > "$OPENSHIFT_DATA_DIR/dbdump.sql"
mkdir -p "$OPENSHIFT_DATA_DIR"
# Bundle the app checkout, the DB dump, and the user env vars into one archive
cd "$OPENSHIFT_REPO_DIR"
tar czf "$OPENSHIFT_DATA_DIR/backup.tar.gz" Cachet -C "$OPENSHIFT_DATA_DIR" dbdump.sql -C ~/.env/ user_vars

This should create a file named $OPENSHIFT_DATA_DIR/backup.tar.gz.

Go to the current 1.x branch and see the instructions there on how to deploy backup.tar.gz.
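The multiple -C options in that tar command switch directories between archive members, which is how the app checkout, the DB dump, and the env vars end up in one archive. A minimal, self-contained illustration of the same pattern (throwaway temp directories, not the real gear paths):

```shell
set -e
# Recreate the rough shape of the gear layout with throwaway directories
work=$(mktemp -d)
mkdir -p "$work/repo/Cachet" "$work/data" "$work/env/user_vars"
echo app   > "$work/repo/Cachet/index.php"
echo dump  > "$work/data/dbdump.sql"
echo 1.2.0 > "$work/env/user_vars/CACHET_VERSION"
cd "$work/repo"
# Each -C changes directory before the following members are added,
# so three separate locations land side by side in one archive
tar czf "$work/backup.tar.gz" Cachet -C "$work/data" dbdump.sql -C "$work/env" user_vars
tar tzf "$work/backup.tar.gz"
```

Listing the archive should show Cachet/, dbdump.sql, and user_vars/ at the top level, which matches the layout the restore instructions assume.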

How do I go about getting the backup.tar.gz file off the 1.x-HHVM instance? @ALinuxNinja

Thanks so much!

You should be able to use scp to transfer it out. Not sure how on Windows; last I checked, FileZilla should be able to do it.
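For example (a sketch: the UUID and hostname below are placeholders; substitute the SSH user and host for your own gear, which the rhc CLI shows via `rhc app show` if you have it installed):

```shell
# Copy the backup off the gear to the current local directory.
# On OpenShift v2 gears, $OPENSHIFT_DATA_DIR corresponds to ~/app-root/data/.
scp UUID@cachet-dockdogs.rhcloud.com:app-root/data/backup.tar.gz .
```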

I'm on a Mac (or Ubuntu 15.10), so an SSH connection works. I'll check shortly, thanks!

Thank you,

Brian J King

@ALinuxNinja I've tried using the backup.tar.gz file from 1.x-HHVM to upgrade to a 1.x build or a 2.x build, and each time I go to log in to the 1.x or 2.x build I'm taken to the setup onboarding process and the dashboard is empty.

I'm thinking my best bet is to simply launch a new 2.x build from the master branch on a new gear and manually copy my components over from 1.x-hhvm.

Thoughts?

Keep up the awesome work, I really appreciate all you're doing!

I did notice the storage space issue happens when I execute git push origin to push the backup.tar.gz file to the new instance:

remote: tar: Cachet/vendor/roumen/feed/src: Cannot mkdir: No such file or directory
remote: tar: Cachet/vendor/roumen/feed/src/Roumen: Cannot mkdir: No such file or directory
remote: tar: Cachet/vendor/roumen/feed/src/Roumen/Feed: Cannot mkdir: No such file or directory
remote: tar: Cachet/vendor/roumen/feed/src/Roumen/Feed/Facades: Cannot mkdir: No such file or directory
remote: tar: Cachet/vendor/roumen/feed/src/Roumen/Feed/Facades/Feed.php: Cannot open: No such file or directory
remote: tar: Cachet/vendor/roumen/feed/src/Roumen/Feed/Feed.php: Cannot open: No such file or directory
remote: tar: Cachet/vendor/roumen/feed/src/Roumen/Feed/FeedServiceProvider.php: Cannot open: No such file or directory
remote: tar: Cachet/vendor/roumen/feed/src/views: Cannot mkdir: No such file or directory
remote: tar: Cachet/vendor/roumen/feed/src/views/atom.blade.php: Cannot open: No such file or directory
remote: tar: Cachet/vendor/roumen/feed/src/views/rss.blade.php: Cannot open: No such file or directory
remote: tar: Cachet/vendor/roumen/feed/tests: Cannot mkdir: No such file or directory
remote: tar: Cachet/vendor/roumen/feed/tests/FeedTest.php: Cannot open: No such file or directory
remote: tar: Cachet/vendor/autoload.php: Cannot open: Disk quota exceeded
remote: tar: dbdump.sql: Cannot open: Disk quota exceeded
remote: tar: user_vars: Cannot mkdir: Disk quota exceeded
remote: tar: user_vars/DB_HOST: Cannot open: No such file or directory
remote: tar: user_vars/DB_DATABASE: Cannot open: No such file or directory
remote: tar: user_vars/DB_USERNAME: Cannot open: No such file or directory
remote: tar: user_vars/DB_PASSWORD: Cannot open: No such file or directory
remote: tar: user_vars/DB_PORT: Cannot open: No such file or directory
remote: tar: user_vars/CACHET_VERSION: Cannot open: No such file or directory
remote: tar: Cachet/vendor/bin/doctrine-dbal: Cannot create symlink to `../doctrine/dbal/bin/doctrine-dbal': Disk quota exceeded
remote: tar: Cachet/vendor/bin/commonmark: Cannot create symlink to `../league/commonmark/bin/commonmark': Disk quota exceeded
remote: tar: Cachet/vendor/bin/psysh: Cannot create symlink to `../psy/psysh/bin/psysh': Disk quota exceeded
remote: tar: Exiting with failure status due to previous errors
remote: /var/lib/openshift/565ef4152d52711d73000131/app-root/runtime/repo/.openshift/action_hooks/deploy: line 38: dbdump.sql: No such file or directory
remote: cp: cannot stat `user_vars/CACHET_VERSION': No such file or directory
remote: rm: cannot remove `/var/lib/openshift/565ef4152d52711d73000131/app-root/runtime/repo//dbdump.sql': No such file or directory
remote: > rm -f compiled.php config.php routes.php services.json
remote: Loading composer repositories with package information
remote: Installing dependencies from lock file
remote:   - Installing danielstjules/stringy (1.10.0)
remote:     Cloning 4749c205db47ee5b32e8d1adf6d9aff8db6caf3b
remote:
remote:   - Installing classpreloader/classpreloader (2.0.0)
remote:     Cloning 8c3c14b10309e3b40bce833913a6c0c0b8c8f962
remote:
remote:   - Installing laravel/framework (5.1.x-dev ea5f078)
remote:     Cloning ea5f078bbbdf7e06bac8b107bdaae50940fef567
remote:
remote:   - Installing guzzlehttp/psr7 (1.2.0)
remote:     Cloning 4ef919b0cf3b1989523138b60163bbcb7ba1ff7e
remote:
remote:   - Installing guzzlehttp/promises (1.0.3)
remote:     Cloning b1e1c0d55f8083c71eda2c28c12a228d708294ea
remote:
remote:   - Installing guzzlehttp/guzzle (6.1.0)
remote:     Cloning 66fd14b4d0b8f2389eaf37c5458608c7cb793a81
remote:
remote:   - Installing graham-campbell/markdown (v5.1.0)
remote:     Cloning 990e9ef977376331ba1ed50fe36adfbe7e34677f
remote:
remote:   - Installing alt-three/emoji (v2.0.0)
remote:     Cloning 75ce4b4d09479a96a1a087598f8c8322d88bbeac
remote:
remote:   - Installing alt-three/validator (v1.3.0)
remote:     Cloning 4325ea481844a230e0752b7f4860f9360981aa11
remote:
remote:   - Installing asm89/stack-cors (0.2.1)
remote:     Cloning 2d77e77251a434e4527315313a672f5801b29fa2
remote:
remote:   - Installing barryvdh/laravel-cors (v0.5.0)
remote:     Cloning 9fb31c457a416931684c7b8759751d1848aed5a8
remote:
remote:   - Installing doctrine/dbal (v2.5.2)
remote:     Cloning 01dbcbc5cd0a913d751418e635434a18a2f2a75c
remote:
remote:   - Installing fideloper/proxy (3.0.0)
remote:     Cloning cc7937c3b4e285e24e020beeaff331180a641e6d
remote:
remote:   - Installing graham-campbell/security (v3.3.0)
remote:     Cloning 3211aec137362c61dbd28ae8b01920c6d3da82fd
remote:
remote:   - Installing graham-campbell/binput (v3.2.1)
remote:     Cloning d0053d670ed0e71868ce8784f2fffa96f8007bf7
remote:
remote:   - Installing graham-campbell/core (v4.1.1)
remote:     Cloning 8a399ac87aa90b064859cfa56e80e874fb910c85
remote:
remote:   - Installing filp/whoops (1.1.7)
remote:     Cloning 72538eeb70bbfb11964412a3d098d109efd012f7
remote:
remote:   - Installing graham-campbell/exceptions (v5.0.0)
remote:     Cloning a62d5d189032f71e1cd97914ebb59c9a7f4b4767
remote:
remote:   - Installing graham-campbell/throttle (v5.0.0)
remote:     Cloning 7c5a10d252732be491a6dd8306bb6969b5bc963b
remote:
remote:   - Installing jenssegers/date (v3.0.10)
remote:     Cloning 467278308153eb27048e471213366193587d87a0
remote:
remote:   - Installing mccool/laravel-auto-presenter (4.1.0)
remote:     Cloning 3f1c9d3ef8a1cc5d78dea2654058145c2920f25e
remote:
remote:   - Installing pragmarx/google2fa (v0.5.0)
remote:     Cloning ff92a3f979b7807a2219f895db92b5c28f2a5498
remote:
remote:   - Installing roumen/feed (v2.9.3)
remote:     Cloning 91a7af5e7252f95d2fa8694f214f50e2866c1e59
remote:
remote: Generating optimized autoload files
remote: > php artisan optimize --force
remote: Generating optimized class loader
remote: Compiling common classes
remote: > php artisan config:cache
remote: Configuration cache cleared!
remote: Configuration cached successfully!
remote: > php artisan route:cache
remote: Route cache cleared!
remote: Routes cached successfully!
remote: > chmod -R 755 storage
remote: Migration table created successfully.
remote: Migrated: 2015_01_05_201324_CreateComponentGroupsTable
remote: Migrated: 2015_01_05_201444_CreateComponentsTable
remote: Migrated: 2015_01_05_202446_CreateIncidentTemplatesTable
remote: Migrated: 2015_01_05_202609_CreateIncidentsTable
remote: Migrated: 2015_01_05_202730_CreateMetricPointsTable
remote: Migrated: 2015_01_05_202826_CreateMetricsTable
remote: Migrated: 2015_01_05_203014_CreateSettingsTable
remote: Migrated: 2015_01_05_203235_CreateSubscribersTable
remote: Migrated: 2015_01_05_203341_CreateUsersTable
remote: Migrated: 2015_01_09_083419_AlterTableUsersAdd2FA
remote: Migrated: 2015_01_16_083825_CreateTagsTable
remote: Migrated: 2015_01_16_084030_CreateComponentTagTable
remote: Migrated: 2015_02_28_214642_UpdateIncidentsAddScheduledAt
remote: Migrated: 2015_05_19_214534_AlterTableComponentGroupsAddOrder
remote: Migrated: 2015_05_20_073041_AlterTableIncidentsAddVisibileColumn
remote: Migrated: 2015_05_24_210939_create_jobs_table
remote: Migrated: 2015_05_24_210948_create_failed_jobs_table
remote: Migrated: 2015_06_10_122216_AlterTableComponentsDropUserIdColumn
remote: Migrated: 2015_06_10_122229_AlterTableIncidentsDropUserIdColumn
remote: Migrated: 2015_08_02_120436_AlterTableSubscribersRemoveDeletedAt
remote: Migrated: 2015_08_13_214123_AlterTableMetricsAddDecimalPlacesColumn
remote: Checking configuration
remote: - php-fpm.ini: No change
remote: - php-fpm.conf: No change
remote: - php.ini: No change
remote: PHP-FPM already running
remote: Nginx instance is started
remote: -------------------------
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://565ef4152d52711d73000131@cachet1x-dockdogs.rhcloud.com/~/git/cachet1x.git/
   554da6f..ee1fdfa  master -> master

I've never run into that; it's probably because your SQL DB is too large for a restore. Not sure, but the cartridges probably take up some extra space, so there wasn't much to begin with :| You will probably have to go with a larger gear size if you want restores working.

Probably best to start fresh with v2 if you're running into that.

That being said, future restores may hit the same issue; there's not much I can do about it, since it's a disk space problem on the gear.
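If you want to see how close you are to the quota before attempting another restore, a quick check of the persistent data directory (OPENSHIFT_DATA_DIR is assumed to be set on the gear; outside OpenShift this falls back to the current directory so the command still runs):

```shell
# Show total disk usage of the data directory in human-readable form
du -sh "${OPENSHIFT_DATA_DIR:-.}"
```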