iaconogi/bigSCale2

running function bigscale: Error in vector("list", tot.clusters * tot.clusters) : vector size cannot be infinite

Closed this issue · 12 comments

Thanks for this wonderful tool!
I met this error when using the test data that bigSCale2 provides.
library(bigSCale)
data(sce)
sce=bigscale(sce,speed.preset='fast')

[1] "PASSAGE 1) Setting the bins for the expression data ...."
[1] "Pre-processing) Removing null rows "
[1] "Setting the size factors ...."
[1] "Generating the edges ...."
[1] "Creating edges..."
[1] "93.9 % of elements < 10 counts, therefore Using a UMIs compatible binning"
[1] "PASSAGE 2) Storing the Normalized data ...."
[1] "PASSAGE 3) Computing the numerical model (can take from a few minutes to 30 mins) ...."
[1] "Computing Overdispersed genes ..."
[1] "Analyzing 3005 cells for ODgenes, min_ODscore=2.33"
[1] "Discarding skewed genes"
[1] "Using 15596 genes detected in at least >15 cells"
[1] "Further reducing to 15563 geni after discarding skewed genes"
[1] "Determined 1538 overdispersed genes"
[1] "Using 25 PCA components for 1538 genes and 3005 cells"
[1] "Computing t-SNE and UMAP..."
[1] "Computing the markers (slowest part) ..."
Error in vector("list", tot.clusters * tot.clusters) :
vector size cannot be infinite
In addition: Warning message:
In max(clusters) : no non-missing arguments to max; returning -Inf

sessionInfo()
R version 3.6.1 (2019-07-05)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 18362)

Matrix products: default

attached base packages:
[1] parallel stats4 stats graphics grDevices utils datasets methods base

other attached packages:
[1] SingleCellExperiment_1.8.0 SummarizedExperiment_1.16.0 DelayedArray_0.12.0 BiocParallel_1.20.0 matrixStats_0.55.0
[6] Biobase_2.46.0 GenomicRanges_1.38.0 GenomeInfoDb_1.22.0 IRanges_2.20.1 S4Vectors_0.24.0
[11] BiocGenerics_0.32.0 bigSCale_2.0

loaded via a namespace (and not attached):
[1] umap_0.2.3.1 Rcpp_1.0.3 RSpectra_0.15-0 compiler_3.6.1 pillar_1.4.2 XVector_0.26.0
[7] prettyunits_1.0.2 bitops_1.0-6 tools_3.6.1 progress_1.2.2 zlibbioc_1.32.0 zeallot_0.1.0
[13] packrat_0.5.0 jsonlite_1.6 Rtsne_0.15 lifecycle_0.1.0 gtable_0.3.0 tibble_2.1.3
[19] lattice_0.20-38 pkgconfig_2.0.3 rlang_0.4.2 Matrix_1.2-18 rstudioapi_0.10 GenomeInfoDbData_1.2.2
[25] dplyr_0.8.3 askpass_1.1 vctrs_0.2.0 hms_0.5.2 tidyselect_0.2.5 RcppZiggurat_0.1.5
[31] grid_3.6.1 Rfast_1.9.7 reticulate_1.13 glue_1.3.1 R6_2.4.1 purrr_0.3.3
[37] ggplot2_3.2.1 magrittr_1.5 scales_1.1.0 backports_1.1.5 assertthat_0.2.1 colorspace_1.4-1
[43] openssl_1.4.1 lazyeval_0.2.2 munsell_0.5.0 RCurl_1.95-4.12 crayon_1.3.4 zoo_1.8-6

Oh, by the way, I removed the modality parameter in the function preprocess to avoid an error, and that is how I got here.

traceback()
5: matrix(vector("list", tot.clusters * tot.clusters), tot.clusters,
tot.clusters)
4: calculate.marker.scores(expr.norm = normcounts(object), clusters = getClusters(object),
N_pct = object@int_metadata$model, edges = object@int_metadata$edges,
lib.size = sizeFactors(object), cap.ones = cap.ones, ...)
3: computeMarkers(sce, speed.preset = speed.preset)
2: computeMarkers(sce, speed.preset = speed.preset)
1: bigscale(sce, speed.preset = "normal")
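The warning in the traceback suggests where the infinity comes from: if no clusters were stored on the object, getClusters() presumably returns an empty vector, and base R's max() on an empty vector returns -Inf. A minimal base-R sketch of that failure mode (the variable names mirror the traceback; the empty clusters vector is an assumption about the object's state):

```r
# Assumption: no cluster labels were stored, so clusters is empty.
clusters <- integer(0)

# Warning: "no non-missing arguments to max; returning -Inf"
tot.clusters <- max(clusters)

# (-Inf) * (-Inf) == Inf, so the requested list size is infinite:
vector("list", tot.clusters * tot.clusters)
# Error in vector("list", tot.clusters * tot.clusters) :
#   vector size cannot be infinite
```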

I believe the setclust step might be missing from the setDCT function.

I will fix it in the next 24 hours and I'll let you know. Apologies for the bug.

Please let me know if it works with the latest fix.
Best
Giovanni

I made yet another update

Thanks for the update! I retried it and still got the preprocess error, which could be solved by removing the 'modality = modality' in function 'preprocess' of bigscale. This time the computeMarkers error was gone!

Also, after running bigscale (speed.preset = "fast") on the sce from data(sce), most of the Viewxxxxx functions reported an infinite-value error. Using the workflow from the Advanced Use section of the README instead resulted in almost no errors.

I checked the old version of the bigscale function and found that storePseudo is no longer in the bigscale pipeline. I really like the pseudotime computation and visualisation in this package.

Dear zehualilab
now it should all work.
If you want to create a pseudotime then you should run

sceZ=storePseudo(sceZ)

after having completed the regular bigscale() pipeline.

Let me know if you encounter other issues
Best
Giovanni

Thanks for this version. The preprocess error was gone. However, I still got the viewSignatures error after running bigscale on sce, the test data. I believe it might be due to a failure in constructing the htree of the heatmap.

I made a small update about that.
Now you can do

sce=setDistances(sce) # if you have already run this command then there is no need to run it again
sce=storeTransformed(sce)
viewSignatures(sce)

I really recommend using speed.preset='slow' when computing the markers, for better signatures in viewSignatures().
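Putting the commands from this thread together, the full sequence would look roughly like the sketch below (the ordering of the optional steps is my reading of the thread, not an official workflow):

```r
library(bigSCale)
data(sce)  # example dataset shipped with the package

# Main pipeline; 'slow' computes markers more thoroughly,
# which gives better signatures in viewSignatures().
sce <- bigscale(sce, speed.preset = 'slow')

# For viewSignatures(): skip setDistances() if it was already run.
sce <- setDistances(sce)
sce <- storeTransformed(sce)
viewSignatures(sce)

# Optional: pseudotime, run after the regular pipeline has completed.
sce <- storePseudo(sce)
```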
Best
Giovanni