Cleafy/promqueen

[Question] How to use promplay?

Closed this issue · 2 comments

First of all, thank you for this app.

My goal

  1. Export data from the Prometheus instance running on server X
  2. Import that data into the Prometheus instance running on server Y

How to replicate

  1. Server X runs Prometheus on port 9090, server Y on port 9091
  2. Export data with promrec (5s interval, 3 iterations)
    promrec --output="./data/metrics" -u service.name=http://localhost:9090/metrics -i 5s -n 3
    The output file is metrics
  3. Use vim to find and replace port 9090 with 9091 in the metrics file (a scripted sed equivalent is sketched after this list):
    :%s/:9090/:9091/g
  4. Import to server Y. This is where I'm having trouble (assuming I'm in a different directory):
    promplay --nopromcfg --dir=.  --storage.path="data"
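
As an aside on step 3, the same substitution can be scripted instead of done interactively in vim. A minimal sketch using sed, assuming the recording was written to ./data/metrics as in step 2 (GNU sed syntax; on macOS/BSD use -i ''):

    # rewrite every occurrence of :9090 to :9091 in place
    sed -i 's/:9090/:9091/g' ./data/metrics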

Output of the promplay command in step 4:

INFO[0000] Loading series map and head chunks...         source="storage.go:373"
WARN[0000] Persistence layer appears dirty.              source="persistence.go:815"
WARN[0000] Starting crash recovery. Prometheus is inoperational until complete.  source="crashrecovery.go:40"
WARN[0000] To avoid crash recovery in the future, shut down Prometheus with SIGTERM or a HTTP POST to /-/quit.  source="crashrecovery.go:41"
INFO[0000] Scanning files.                               source="crashrecovery.go:55"
INFO[0000] File scan complete. 0 series found.           source="crashrecovery.go:83"
INFO[0000] Checking for series without series file.      source="crashrecovery.go:85"
INFO[0000] Check for series without series file complete.  source="crashrecovery.go:130"
INFO[0000] Cleaning up archive indexes.                  source="crashrecovery.go:402"
INFO[0000] Clean-up of archive indexes complete.         source="crashrecovery.go:493"
INFO[0000] Rebuilding label indexes.                     source="crashrecovery.go:501"
INFO[0000] Indexing metrics in memory.                   source="crashrecovery.go:502"
INFO[0000] Indexing archived metrics.                    source="crashrecovery.go:510"
INFO[0000] All requests for rebuilding the label indexes queued. (Actual processing may lag behind.)  source="crashrecovery.go:529"
WARN[0000] Crash recovery complete.                      source="crashrecovery.go:152"
INFO[0000] 0 series loaded.                              source="storage.go:378"
Frames processed: [-----------------------------------------------------------------------------------------------] 0s 100.00%
INFO[0000] Stopping local storage...                     source="storage.go:396"
INFO[0000] Stopping maintenance loop...                  source="storage.go:398"
INFO[0000] Maintenance loop stopped.                     source="storage.go:1259"
INFO[0000] Stopping series quarantining...               source="storage.go:402"
INFO[0000] Series quarantining stopped.                  source="storage.go:1701"
INFO[0000] Stopping chunk eviction...                    source="storage.go:406"
INFO[0000] Chunk eviction stopped.                       source="storage.go:1079"
INFO[0000] Checkpointing in-memory metrics and chunks...  source="persistence.go:612"
INFO[0000] Done checkpointing in-memory metrics and chunks in 9.896598ms.  source="persistence.go:639"
INFO[0000] Checkpointing fingerprint mappings...         source="persistence.go:1480"
INFO[0000] Done checkpointing fingerprint mappings in 4.744828ms.  source="persistence.go:1503"
INFO[0000] Local storage stopped.                        source="storage.go:421"

Zero series were loaded, but I can still see the data directory:

data
├── VERSION
├── archived_fingerprint_to_metric
├── archived_fingerprint_to_timerange
├── heads.db
├── labelname_to_labelvalues
├── labelpair_to_fingerprints
└── mappings.db

How do I import this data into Prometheus? I thought the listing below would be the expected output, and that I could then copy the hash-named block directories into the proper Prometheus data folder (a rough sketch of that copy follows the listing).

data
├── 01E9WPHANY7DX13M80X3SM909T
├── 01E9ZHMENNSDYHPJRP0KG8G5SQ
├── 01E9ZHMHZJ1CSR8BV2AKD6KS17
├── 01E9ZPNFQRYE43VEZ60Z7ZVB31
├── lock
├── queries.active
├── snapshots
└── wal
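
For clarity, the copy I had in mind would look roughly like the sketch below; the target path /var/lib/prometheus/data is just an example, and the target Prometheus would be stopped before copying:

    # hypothetical sketch: copy the hash-named block directories into the
    # target Prometheus data folder while that Prometheus instance is stopped
    cp -r data/01E9* /var/lib/prometheus/data/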

Could you please share the proper usage of this great tool?

@unfor19 Were you able to get it working?

I am also a newbie to this tool and am still figuring out how to use it.

@dineshba I closed this issue since I gave up using this tool.

I'm using node-exporter with the textfile collector instead; you can read more about it here: https://www.robustperception.io/using-the-textfile-collector-from-a-shell-script
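
A rough sketch of that approach, in case it helps anyone who lands here. The metric name and the textfile directory below are just examples; node-exporter has to be started with --collector.textfile.directory pointing at that directory:

    #!/bin/sh
    # write the metric to a temp file first, then rename it so node-exporter
    # never scrapes a half-written file (the rename is atomic on the same filesystem)
    TEXTFILE_DIR=/var/lib/node_exporter/textfile_collector
    echo "my_batch_job_last_success_timestamp_seconds $(date +%s)" > "$TEXTFILE_DIR/my_batch_job.prom.$$"
    mv "$TEXTFILE_DIR/my_batch_job.prom.$$" "$TEXTFILE_DIR/my_batch_job.prom"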