flekschas/hipiler

Help creating a config file to view my own data

liz-is opened this issue · 8 comments

Hi there,

I've made a CSV config file to view my own data, based on the example, but when I drag and drop it I get a "could not load data" error on the right-hand side, and the left-hand side just says "Loading...". Drag-and-drop works fine with the example CSV file, though.

For the server column, I've tried both "localhost" and my private IP. Either of these works in HiGlass config files for my local instance of HiGlass. Does this need the "//" prefix? I don't know why it's included in the example, and it didn't seem to make a difference for me.

I used the tileset UUID from my local instance of HiGlass for the dataset column.

I am using a custom coordinate system, so maybe that's causing issues?

Should I create a full .json config file instead?

Any help would be appreciated!

Thanks for reaching out and sorry about the issue :(

My best guess is that this issue is somehow related to higlass-server. Do you know which version you are running? If you're using higlass-docker, you should find the server version at http://your-server-ip/version.txt.

Also, in terms of your config, are you using a CSV or JSON file? The linked example is using the CSV syntax.

An easy way to test if the problem is due to the server or the frontend is to query the server manually and see if you get back the correct data.

To manually extract a snippet from a Hi-C map using higlass-server, you can run the following command in the terminal of your choice:

curl -H 'Content-Type: application/json' -d '[["4",69285000,70075000,"4",69285000,70075000,"CQMd6V_cRw6iCI_-Unl3PQ",0]]' https://higlass.io/api/v1/fragments_by_loci/?dims=2

The response should be

{"fragments": [[[0.3818360760287786, 0.0], [0.0, 1.0]]], "indices": [0], "dataTypes": ["matrix"]}

(We extracted a TAD at a final resolution of 2x2 pixels.)

The argument of -d is structured as follows:

[
  ["chr1", start1, end1, "chr2", start2, end2, "uuid", 0]
]

The last 0 tells the server to use the optimal aggregation (zoom level) for cutting out the data, based on your output size. chr1 and chr2 are chromosome names.
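As a quick sketch of how that payload is assembled programmatically (the chromosome names, coordinates, and tileset UUID are the example values from the curl call above, not HiPiler internals; substitute your own):

```python
import json

# One locus per entry; the server accepts a list of such loci.
locus = ["4", 69285000, 70075000,   # chr1, start1, end1
         "4", 69285000, 70075000,   # chr2, start2, end2
         "CQMd6V_cRw6iCI_-Unl3PQ",  # tileset UUID
         0]                          # 0 = let the server pick the zoom level
payload = json.dumps([locus])        # this is the string passed to curl's -d
print(payload)
```

The printed string is exactly what goes after `-d` in the curl command.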

Regarding your question about //, these double slashes are just a way to tell the browser to use the same protocol that was used for loading the website. E.g., if you load HiPiler via http, it uses http to query http://higlass.io; if you load HiPiler via https, it will query https://higlass.io. This is only needed because cross-protocol requests are blocked in some browsers.
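You can see this scheme-relative resolution behavior with Python's standard-library `urljoin` (just an illustration of the URL rules, not HiPiler code; the example.com page URL is made up):

```python
from urllib.parse import urljoin

# A scheme-relative URL like "//higlass.io/..." inherits the scheme
# of the page that references it:
print(urljoin("http://example.com/hipiler/", "//higlass.io/api/v1/"))
# -> http://higlass.io/api/v1/
print(urljoin("https://example.com/hipiler/", "//higlass.io/api/v1/"))
# -> https://higlass.io/api/v1/
```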

If you like, I can look at your config. The specific annotations you would like to extract don't matter too much, so you could define just a single annotation, and it could even be a random one.

FYI, also feel free to join our Slack channel at http://bit.ly/higlass-slack for faster conversations :)

Ah, then maybe the issue is that I never interact with higlass-server directly but only through higlass-manage.

I am using a CSV config file. It seemed like the easier way to start!

I could access the version info at http://localhost:8989/version.txt - do I need to add the port to the server URL in the config file? Here's all the version info:

SERVER_VERSION: 1.11.2
WEB_APP_VERSION: 1.1.7
LIBRARY_VERSION: 1.6.10
HIPILER_VERSION: 1.3.1
MULTIVEC_VERSION: 0.2.1
CLODIUS_VERSION: 0.10.13
TIME_INTERVAL_TRACK_VERSION: 0.2.0-rc.2
LINEAR_LABELS_TRACK: <LINEAR_LABELS_TRACK>
LABELLED_POINTS_TRACK: <LABELLED_POINTS_TRACK>
BEDLIKE_TRIANGLES_TRACK_VERSION: 0.1.2
RANGE_TRACK_VERSION: 0.1.1
PILEUP_VERSION: 0.1.1

I tried the command you suggested, modified for one of my datasets and regions:
curl -H 'Content-Type: application/json' -d '[["X",1484002,1486000,"X",1512002,1514000,"W-ODnJmpT1OOX9SWreO5UQ",0]]' http://localhost:8989/api/v1/fragments_by_loci/?dims=2

And got this error:
{"error": "Could not convert loci.", "error_message": "\"Can't open attribute (Can't locate attribute: 'max-zoom')\""}

> FYI, also feel free to join our Slack channel at http://bit.ly/higlass-slack for faster conversations :)

Ah yeah, I have actually joined but I keep forgetting to use it! I'm happy to move this discussion there if you prefer.

Thanks for the information.

I found the issue: "Can't locate attribute: 'max-zoom'". It seems like your cooler file doesn't have a max-zoom attribute. Let me reach out to Nezar to ask whether this attribute was removed in newer versions of Cooler. If so, that would explain why HiPiler works on older files but not on newly generated .cool files.

Okay, I can confirm that new cooler files don't have the max-zoom property anymore, so I have to update higlass-server.

It turned out that HiPiler wasn't compatible with multires cooler v2 files. I just added a PR to higlass-server to fix this issue (higlass/higlass-server#114). Once the PR is merged we can update higlass-docker and higlass.io.

Thanks a lot!

Just to check: higlass-manage automatically uses the most recent Docker image, is that correct? So once higlass-docker is updated I shouldn't need to update anything on my machine?

The PR is merged. We just have to wait until a new Docker build is issued, hopefully by the end of the week. To update Docker with higlass-manage, all you have to do is issue a restart.

Great! I'll close this issue then :)