[BUG] Different Segmentation Result between GUI and Python command
Opened this issue · 4 comments
Hello! I got different segmentation results using the Cellpose GUI and the Python command line, and I'm very confused. Could you please explain why this is happening? Am I missing any parameters? Thank you!
When I use the command line, the segmentation result is the one attached.
But when I use the GUI, the result is perfect:
(cellpose) D:>python -m cellpose --Zstack
2024-10-10 19:21:38,176 [INFO] WRITING LOG OUTPUT TO C:\Users\Administrator.DESKTOP-286EQ11.cellpose\run.log
2024-10-10 19:21:38,177 [INFO]
cellpose version: 3.0.11
platform: win32
python version: 3.10.15
torch version: 2.4.1+cu118
2024-10-10 19:21:38,339 [INFO] ** TORCH CUDA version installed and working. **
GUI_INFO: loading image: E:/温敏分子行为/D218/2024-09-08/3DTexture/3Dvolume/2.MedianFilter/10.tiff
2024-10-10 19:22:05,982 [INFO] reading tiff with 52 planes
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 52/52 [00:00<00:00, 800.01it/s]
GUI_INFO: converted to float and normalized values to 0.0->255.0
GUI_INFO: normalization checked: computing saturation levels (and optionally filtered image)
{'lowhigh': None, 'percentile': [1.0, 99.0], 'normalize': True, 'norm3D': True, 'sharpen_radius': 0, 'smooth_radius': 0, 'tile_norm_blocksize': 0, 'tile_norm_smooth3D': 1, 'invert': False}
[0, 255.0]
2024-10-10 19:22:28,696 [INFO] ** TORCH CUDA version installed and working. **
2024-10-10 19:22:28,696 [INFO] >>>> using GPU (CUDA)
2024-10-10 19:22:28,697 [INFO] >> cyto3 << model set to be used
2024-10-10 19:22:28,728 [INFO] >>>> loading model C:\Users\Administrator.DESKTOP-286EQ11.cellpose\models\cyto3
2024-10-10 19:22:28,761 [INFO] >>>> model diam_mean = 30.000 (ROIs rescaled to this size during training)
{'lowhigh': None, 'percentile': [1.0, 99.0], 'normalize': True, 'norm3D': True, 'sharpen_radius': 0, 'smooth_radius': 0, 'tile_norm_blocksize': 0, 'tile_norm_smooth3D': 1, 'invert': False}
2024-10-10 19:22:28,795 [INFO] channels set to [0, 0]
2024-10-10 19:22:28,795 [INFO] ~~~ FINDING MASKS ~~~
2024-10-10 19:22:28,795 [INFO] multi-stack tiff read in as having 52 planes 1 channels
2024-10-10 19:22:30,131 [INFO] running YX: 52 planes of size (1365, 1254)
2024-10-10 19:22:30,137 [INFO] 0%| | 0/7 [00:00<?, ?it/s]
2024-10-10 19:22:30,380 [INFO] 14%|#4 | 1/7 [00:00<00:01, 4.12it/s]
2024-10-10 19:22:30,489 [INFO] 29%|##8 | 2/7 [00:00<00:00, 6.09it/s]
2024-10-10 19:22:30,594 [INFO] 43%|####2 | 3/7 [00:00<00:00, 7.29it/s]
2024-10-10 19:22:30,696 [INFO] 57%|#####7 | 4/7 [00:00<00:00, 8.11it/s]
2024-10-10 19:22:30,799 [INFO] 71%|#######1 | 5/7 [00:00<00:00, 8.62it/s]
2024-10-10 19:22:30,901 [INFO] 86%|########5 | 6/7 [00:00<00:00, 8.99it/s]
2024-10-10 19:22:30,996 [INFO] 100%|##########| 7/7 [00:00<00:00, 8.15it/s]
2024-10-10 19:22:31,582 [INFO] running ZY: 1365 planes of size (52, 1254)
2024-10-10 19:22:31,604 [INFO] 0%| | 0/86 [00:00<?, ?it/s]
2024-10-10 19:22:31,712 [INFO] 9%|9 | 8/86 [00:00<00:01, 74.35it/s]
2024-10-10 19:22:31,820 [INFO] 20%|#9 | 17/86 [00:00<00:00, 79.21it/s]
2024-10-10 19:22:31,929 [INFO] 30%|### | 26/86 [00:00<00:00, 80.75it/s]
2024-10-10 19:22:32,037 [INFO] 41%|#### | 35/86 [00:00<00:00, 81.77it/s]
2024-10-10 19:22:32,145 [INFO] 51%|#####1 | 44/86 [00:00<00:00, 82.33it/s]
2024-10-10 19:22:32,254 [INFO] 62%|######1 | 53/86 [00:00<00:00, 82.41it/s]
2024-10-10 19:22:32,363 [INFO] 72%|#######2 | 62/86 [00:00<00:00, 82.46it/s]
2024-10-10 19:22:32,474 [INFO] 83%|########2 | 71/86 [00:00<00:00, 82.18it/s]
2024-10-10 19:22:32,584 [INFO] 93%|#########3| 80/86 [00:00<00:00, 82.07it/s]
2024-10-10 19:22:32,655 [INFO] 100%|##########| 86/86 [00:01<00:00, 81.84it/s]
2024-10-10 19:22:33,542 [INFO] running ZX: 1254 planes of size (52, 1365)
2024-10-10 19:22:33,564 [INFO] 0%| | 0/79 [00:00<?, ?it/s]
2024-10-10 19:22:33,674 [INFO] 11%|#1 | 9/79 [00:00<00:00, 81.82it/s]
2024-10-10 19:22:33,785 [INFO] 23%|##2 | 18/79 [00:00<00:00, 81.38it/s]
2024-10-10 19:22:33,897 [INFO] 34%|###4 | 27/79 [00:00<00:00, 80.91it/s]
2024-10-10 19:22:34,009 [INFO] 46%|####5 | 36/79 [00:00<00:00, 80.69it/s]
2024-10-10 19:22:34,119 [INFO] 57%|#####6 | 45/79 [00:00<00:00, 81.09it/s]
2024-10-10 19:22:34,228 [INFO] 68%|######8 | 54/79 [00:00<00:00, 81.59it/s]
2024-10-10 19:22:34,337 [INFO] 80%|#######9 | 63/79 [00:00<00:00, 81.91it/s]
2024-10-10 19:22:34,446 [INFO] 91%|#########1| 72/79 [00:00<00:00, 82.12it/s]
2024-10-10 19:22:34,531 [INFO] 100%|##########| 79/79 [00:00<00:00, 81.78it/s]
2024-10-10 19:22:35,381 [INFO] network run in 6.41s
2024-10-10 19:23:11,285 [INFO] masks created in 35.90s
2024-10-10 19:23:14,383 [INFO] >>>> TOTAL TIME 45.59 sec
2024-10-10 19:23:15,903 [INFO] 11 cells found with model in 47.225 sec
GUI_INFO: 11 masks found
GUI_INFO: plane 0 outlines processed
GUI_INFO: plane 50 outlines processed
GUI_INFO: creating cellcolors and drawing masks
My work on morphological and texture analysis of time-series 3D layered scanning images requires high-throughput segmentation, so I sincerely seek your guidance. Thank you!
Do you think the edge removal function is the issue? EDIT: I see you are saving masks, not masks_cleaned.
When you run cellpose from the API, please call io.logger_setup() first so you can see whether it prints the same thing as the GUI, and paste the output here so we can take a look.
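For context on the suggestion above: io.logger_setup() wires up Python's standard logging so cellpose's INFO messages become visible in the console (and a run.log file). An editor's stdlib-only sketch of roughly what that gives you, with illustrative names (this is not cellpose's actual internals):

```python
import logging
import sys

def setup_logger(name="cellpose_demo", log_path=None):
    # Hypothetical stand-in for io.logger_setup(): emit INFO-level
    # records to stdout, and optionally to a log file as well.
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(logging.Formatter("%(asctime)s [%(levelname)s] %(message)s"))
    logger.addHandler(handler)
    if log_path is not None:
        logger.addHandler(logging.FileHandler(log_path))
    return logger

logger = setup_logger()
logger.info("channels set to [0, 0]")  # now visible, like in the GUI log
```

Without a handler configured, the API runs silently, which is why the log comparison requires calling the setup first.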
Hello @carsen-stringer
I want to start by saying thank you for making this freely accessible tool, as it has been very helpful!
I am experiencing a similar problem and have not been able to resolve it. To preface, I am not an image analysis or computational expert.
Problem: the masks output by my Cellpose Python script in a Jupyter Notebook look starkly different from those produced in the GUI.
I have not been able to determine why, as I thought, to the best of my knowledge, that I included the necessary parameters in my script. Any help resolving this would be greatly appreciated!
https://drive.google.com/file/d/1-estVYbUV3xZkdTRrcHUqMFTq5GpTjbI/view?usp=drivesdk
Context: I trained my own model using CellPose 3.1.0 for image segmentation using the GUI in a cellpose-env using Anaconda Prompt. My python version is 3.9.20 and torch version is 2.5.1. I started by using the Cyto3 model with default additional settings and applied the cyto3 deblur, then ran. With each new image that popped up after training the model, I continued to work within the human-in-the-loop over about 20-30 images. I used default parameters except for n_epochs = 300. My images are single channel images that I segmented in grayscale.
Code:
import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import psutil  # For system monitoring
import torch
from tqdm import tqdm
from PIL import Image
from cellpose import models, io, core, denoise
from skimage.color import label2rgb
from skimage.measure import regionprops

# Constants for system monitoring
CPU_LIMIT = 80   # Percent CPU usage limit to trigger a sleep
RAM_LIMIT = 80   # Percent RAM usage limit to trigger a sleep
BATCH_SIZE = 10  # Number of images to process in one batch

# Custom parameters to match GUI settings - this is model specific
# and should only be changed when a model is updated.
# SKOV3 parameter set
CELL_DIAMETER = 86    # Replace with the diameter you used in the GUI
FLOW_THRESHOLD = 0.4  # Adjust if you used a different threshold

print("CUDA available:", torch.cuda.is_available())
print("GPU device count:", torch.cuda.device_count())
print("Current device:", torch.cuda.current_device())
print("Device name:", torch.cuda.get_device_name(torch.cuda.current_device()))
# System monitoring function to prevent system overload
def monitor_system():
    while psutil.cpu_percent() > CPU_LIMIT or psutil.virtual_memory().percent > RAM_LIMIT:
        print("System under high load. Waiting for resources to free up...")
        time.sleep(2)  # Wait and check again

# Function to list all "Blue.tiff" files in the directory
def get_blue_files(dir):
    return [f for f in os.listdir(dir) if f.endswith("Blue.tiff")]

# Function to load the image data as an array of pixels
def pull_image(loc):
    tiff = Image.open(loc)
    return np.array(tiff)

# Function to count the number of cells and calculate total cell area in the segmented mask
def analyze_mask(mask):
    cell_count = np.max(mask)  # Highest label value (assumes consecutive labels)
    cell_area = sum(region.area for region in regionprops(mask))  # Total area covered by cells
    return cell_count, cell_area

# Function to calculate total RFU (integrated fluorescence intensity) for each image
def calculate_total_rfu(image):
    return np.sum(image)  # Total RFU is the sum of all pixel intensities
# Function to visualize an image with its mask overlay
def show_image_with_mask(image, mask, title="Image with Mask"):
    # Use label2rgb to overlay the mask on the original image
    overlay = label2rgb(mask, image=image, bg_label=0, alpha=0.5)  # Adjust alpha for transparency
    plt.figure(figsize=(8, 8))
    plt.imshow(overlay)
    plt.title(title)
    plt.axis('off')
    plt.show()

# Batch processing generator for optimization
def batch_process_files(file_list, batch_size):
    for i in range(0, len(file_list), batch_size):
        yield file_list[i:i + batch_size]
# Function to process images by cell line
def process_files_by_cell_line(dir, cell_line_models):
    results = []
    # Initialize the Cellpose deblur model
    deblur_model = denoise.CellposeDenoiseModel(gpu=core.use_gpu(), model_type="cyto3", restore_type="deblur_cyto3")
    # Loop through each cell line entry in the dictionary
    for cell_line, info in cell_line_models.items():
        model_path = info['model_path']
        positions = info['positions']
        # Initialize the Cellpose segmentation model for the specific cell line
        seg_model = models.CellposeModel(gpu=core.use_gpu(), pretrained_model=model_path)
        # Get all "Blue.tiff" files in the directory
        blue_files = get_blue_files(dir)
        # Filter files based on positions (image names containing the specified strings)
        target_files = [f for f in blue_files if any(pos in f for pos in positions)]
        # Batch processing
        for file_batch in batch_process_files(target_files, BATCH_SIZE):
            monitor_system()  # Check system load before processing each batch
            for file_name in tqdm(file_batch, desc=f"Processing {cell_line} images"):
                img_path = os.path.join(dir, file_name)
                img = pull_image(img_path)
                # Deblur the image; CellposeDenoiseModel.eval returns
                # (masks, flows, styles, imgs_dn), so the restored image is the 4th item,
                # not [0][0] (which would be the mask of the first image)
                deblurred_img = deblur_model.eval([img], channels=[0, 0], diameter=CELL_DIAMETER)[3][0]
                # Apply the custom Cellpose segmentation model to the deblurred image
                masks, flows, styles = seg_model.eval(
                    [deblurred_img],
                    diameter=CELL_DIAMETER,         # Set to match GUI settings
                    flow_threshold=FLOW_THRESHOLD,  # Set to match GUI
                    channels=[0, 0]                 # Modify if you use different channels in the GUI
                )
                # Analyze mask to get cell count and total cell area
                cell_count, cell_area = analyze_mask(masks[0])
                # Calculate total integrated fluorescence intensity (RFU) for the image
                total_rfu = calculate_total_rfu(deblurred_img)
                # Display the image with mask overlay (optional for debugging/visualization)
                show_image_with_mask(img, masks[0], title=f"{file_name} - {cell_line}")
                # Append results for each processed image
                results.append({
                    "file_name": file_name,
                    "cell_line": cell_line,
                    "cell_count": cell_count,
                    "cell_area": cell_area,
                    "total_rfu": total_rfu
                })
    # Convert results to DataFrame
    df = pd.DataFrame(results)
    return df
# Example usage
directory = r"C:\Users\CCG2 - G14\Desktop\EXP274 Challenge 1 Full Transfer\SKOV3 Cell Killing Detection Test"

# Define models and positions for each cell line
cell_line_models = {
    "SKOV3": {
        "model_path": r"C:\Users\CCG2 - G14\Desktop\EXP274 Challenge 1 Full Transfer\SKOV3 Model Training\models\SKOV3 Model 1.0",
        "positions": [f"_B{i}_" for i in range(1, 13)]  # "_B1_" through "_B12_"
    }
}

# Run the processing function
df = process_files_by_cell_line(directory, cell_line_models)

# Display and save results
print(df)
df.to_csv("cellpose_cell_counts_by_cell_line.csv", index=False)
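One frequent source of GUI/API mismatch is normalization: the GUI log shows {'percentile': [1.0, 99.0], ...}, i.e. intensities rescaled between the 1st and 99th percentiles before the network runs. As an illustration only (a crude pure-Python sketch, not cellpose's actual implementation), that rescaling amounts to:

```python
def percentile_normalize(pixels, lower=1.0, upper=99.0):
    # Rescale so the `lower` percentile maps to 0 and the `upper`
    # percentile maps to 1; values outside that range are not clipped here.
    ordered = sorted(pixels)
    n = len(ordered)
    lo = ordered[int(n * lower / 100)]       # crude percentile, illustrative only
    hi = ordered[int(n * upper / 100) - 1]
    span = hi - lo if hi != lo else 1.0
    return [(p - lo) / span for p in pixels]

vals = percentile_normalize(list(range(100)))
```

If a script feeds the model raw intensities while the GUI applies this rescaling (or applies it with different percentiles), the two runs can segment very differently even with identical diameter and thresholds.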
Zip File to Code
SKOV3 Segmentation Test.zip
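A side note on the analyze_mask helper in the script above: np.max(mask) equals the cell count only when labels run consecutively from 1; after any filtering (e.g. min_size removal) the labels can have gaps, and the maximum overstates the count. Counting distinct nonzero labels is safer, e.g. (plain-Python sketch; with NumPy you would use len(np.unique(mask)) minus one for background):

```python
def count_labels(mask_rows):
    # Count distinct nonzero labels in a 2D label image (list of lists here)
    labels = {v for row in mask_rows for v in row if v != 0}
    return len(labels)

mask = [[0, 1, 1],
        [0, 5, 5],   # labels 1 and 5 only: max() would report 5 cells, not 2
        [0, 0, 5]]
```

Here count_labels(mask) returns 2, while a max-based count would report 5.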
Hello @carsen-stringer, I'm sorry for the delayed reply. I don't think the edge removal function is the issue, because I've tried running with and without it. This is the information from the API, which is the same as in the GUI.
io.logger_setup()
2024-11-08 10:55:39,031 [INFO] WRITING LOG OUTPUT TO C:\Users\Administrator.DESKTOP-286EQ11.cellpose\run.log
2024-11-08 10:55:39,031 [INFO]
cellpose version: 3.0.11
platform: win32
python version: 3.10.15
torch version: 2.4.1+cu118
(<Logger cellpose.io (INFO)>, WindowsPath('C:/Users/Administrator.DESKTOP-286EQ11/.cellpose/run.log'))
I see that Draven-Rane is experiencing a similar problem.
This is my image:
10.zip
I have found the reason. I don't know why, but when I stop using a "for" loop and instead use a function plus "list(map(...))", it works the same as in the GUI.
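For what it's worth, list(map(process_file, files)) and a plain for loop execute the same calls in the same order, so the difference more likely came from something else changed in the rewrite (for example, the normalize dict now being passed explicitly). A quick sanity check of the equivalence, with illustrative recorder functions:

```python
calls_map, calls_loop = [], []

def record_map(x):
    calls_map.append(x)

def record_loop(x):
    calls_loop.append(x)

files = ["a.tif", "b.tif", "c.tif"]

# Style 1: map-based, as in the script below
list(map(record_map, files))

# Style 2: a plain for loop
for f in files:
    record_loop(f)
```

Both recorders end up with the identical call sequence, so the control-flow style itself should not change the segmentation.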
This is my code:
import os
from pathlib import Path

import tifffile
from cellpose import models, io, denoise

input_dir = Path('F:/温敏分子行为/D218/2024-10-31/Control_MIP/')
output_dir = Path('F:/温敏分子行为/D218/2024-10-31/Segmentation/')
output_dir.mkdir(parents=True, exist_ok=True)

model = models.Cellpose(model_type='nuclei', gpu=True)
dn_model = denoise.DenoiseModel(model_type="denoise_nuclei", gpu=True)

def process_file(file_path):
    file_name = file_path.name
    print(f"Processing file: {file_name}")
    data = tifffile.imread(file_path)
    print(f"Shape: {data.shape}")

    print("Performing denoising...")
    data_denoised = dn_model.eval(data, channels=None, do_3D=True, diameter=140)

    print("Performing segmentation...")
    masks, flows, styles, diams = model.eval(
        data_denoised,
        diameter=140,
        channels=[0, 0],
        normalize={
            'lowhigh': None,
            'percentile': [1.0, 99.0],
            'normalize': True,
            'norm3D': True,
            'sharpen_radius': 0,
            'smooth_radius': 0,
            'tile_norm_blocksize': 0,
            'tile_norm_smooth3D': 1,
            'invert': False
        },
        flow_threshold=0.4,
        cellprob_threshold=0,
        do_3D=True,
        anisotropy=2.5,
        min_size=100
    )

    # Build the output name from the stem so '.tif' and '.tiff' are both handled
    output_file_name = file_path.stem + '_masks.tiff'
    output_path = output_dir / output_file_name
    io.save_masks(data_denoised, masks, flows, file_names=output_path, png=False, tif=True, channels=[0, 0])
    print(f"Saved masks to {output_path}")

tiff_files = list(input_dir.glob('*.tiff')) + list(input_dir.glob('*.tif'))
list(map(process_file, tiff_files))
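One detail worth double-checking in scripts like this (markdown often swallows asterisks when code is pasted unfenced): pathlib.Path.glob takes a shell-style pattern, so glob('.tiff') with no wildcard matches nothing and silently yields an empty file list, while glob('*.tiff') matches the files. A self-contained demonstration using a temporary directory:

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    d = Path(tmp)
    (d / "10.tiff").touch()
    (d / "11.tif").touch()
    no_star = list(d.glob(".tiff"))    # literal name ".tiff": matches nothing
    with_star = list(d.glob("*.tiff")) + list(d.glob("*.tif"))  # matches both files
```

An empty file list would make the script appear to run while producing no masks at all, which is easy to mistake for a segmentation problem.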