[BUG] Uploading large files with the macOS VFS client fails with NSFileProviderErrorDomain error -2005
trexxeon opened this issue · 2 comments
Is there an existing issue for this?
- I have searched the existing issues
Current Behavior
When I try to upload larger files (over 500 MB) with the desktop client using the Virtual File System for Mac, it starts uploading and transfers practically the whole file, but when it is about to finish it puts an exclamation mark over the cloud icon and gives me NSFileProviderErrorDomain error -2005.
The nginx error log gives me this:
```
[error] 381#381: *1 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 10.0.10.182, server: _, request: "PUT /remote.php/dav/files/username/testfile.iso HTTP/2.0", upstream: "fastcgi://127.0.0.1:9000", host: "cloud.domain.com"
```
And in the nginx access log I get a 502 Bad Gateway:
```
[] "PUT /remote.php/dav/files/username/testfile.iso HTTP/2.0" 502 150 "-" "Nextcloud-macOS/FileProviderExt"
[] "PROPFIND /remote.php/dav/files/username HTTP/2.0" 207 2164 "-" "Nextcloud-macOS/FileProviderExt"
```
The PHP error log shows:
```
[21-Sep-2024 06:50:31] WARNING: [pool www] child 395 exited on signal 9 (SIGKILL) after 445.420041 seconds from start
[21-Sep-2024 06:50:31] NOTICE: [pool www] child 403 started
```
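Signal 9 means the FPM child was killed from outside PHP (for example by the kernel OOM killer or by the FPM master), so one way to narrow this down is to watch the Docker host while an upload runs (sketch):
```bash
# Sketch: check whether the kernel OOM killer terminated the PHP-FPM child,
# and watch the container's memory while a large VFS upload is in progress.
dmesg -T | grep -iE 'oom|killed process'
docker stats nextcloud
```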
I can upload large files (such as 10 GB) through the web UI and via the non-VFS sync without issues. I have searched for a solution and modified many settings without success, which is why I think this is some kind of bug.
Expected Behavior
It should upload the file without error
Steps To Reproduce
Upload a large file with the macOS desktop client with VFS turned on.
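To check whether the server alone can reproduce the failure, the same WebDAV PUT can be attempted without the macOS client (sketch; the hostname, username, and app password below are placeholders):
```bash
# Sketch: reproduce the large PUT over WebDAV without the macOS client.
# cloud.domain.com, username and app-password are placeholders.
dd if=/dev/zero of=/tmp/testfile.iso bs=1M count=1024   # create a ~1 GB test file
curl -v -u username:app-password -T /tmp/testfile.iso \
  "https://cloud.domain.com/remote.php/dav/files/username/testfile.iso"
```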
Environment
- OS: Debian 12
- How docker service was installed: https://docs.docker.com/engine/install/debian/
I have added a php-local.ini with these settings:
```ini
upload_max_filesize = 16G
post_max_size = 16G
output_buffering = 0
max_input_time = 3600
max_execution_time = 3500
memory_limit = 16G
default_socket_timeout = 600
```
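To confirm that these values are actually picked up inside the container, something like the following can be run (sketch; the CLI may load different ini files than PHP-FPM, so this is only a rough check):
```bash
# Sketch: print the effective PHP limits inside the running container.
# The CLI SAPI may read different ini files than PHP-FPM, so treat this
# as an approximation of what web requests actually see.
docker exec nextcloud php -r 'echo ini_get("upload_max_filesize"), " / ", ini_get("post_max_size"), " / ", ini_get("memory_limit"), PHP_EOL;'
```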
I have also tried modifying www2.conf:
```ini
; Pool name
[www]
user = abc
group = abc
pm.max_children = 10
pm.max_requests = 500
request_terminate_timeout = 360
```
CPU architecture
x86-64
Docker creation
```yaml
services:
  nextcloud:
    image: lscr.io/linuxserver/nextcloud:develop
    container_name: nextcloud
    networks:
      caddynet:
        ipv4_address: 172.20.0.x
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /DATA/AppData/nextcloud/config:/config
      - /DATA/AppData/nextcloud/tmp:/var/tmp/php
      - nextcloud_data:/data
    expose:
      - 443
    restart: unless-stopped

  nextcloud-db:
    image: mariadb
    container_name: nextcloud-mariadb
    networks:
      caddynet:
        ipv4_address: 172.20.0.x
    environment:
      MYSQL_ROOT_PASSWORD: redacted
      MYSQL_DATABASE: redacted
      MYSQL_USER: redacted
      MYSQL_PASSWORD: redacted
    volumes:
      - /DATA/AppData/nextcloud/mariadb/data:/var/lib/mysql
      - /DATA/AppData/nextcloud/db_dump:/backup
    ports:
      - "3306:3306"
    restart: unless-stopped

  redis:
    container_name: nextcloud-redis
    image: redis:6.2-alpine
    restart: always
    networks:
      caddynet:
        ipv4_address: 172.20.0.x
    expose:
      - '6379'
    command: 'redis-server --save 60 1 --loglevel warning --requirepass redacted'
    volumes:
      - /DATA/AppData/nextcloud/redis:/data

  collabora:
    container_name: nextcloud-collabora
    image: collabora/code:latest
    networks:
      caddynet:
        ipv4_address: 172.20.0.x
    cap_add:
      - MKNOD
    environment:
      - domain=cloud\\.redacted\\.com
      - username=admin
      - password=redacted
    expose:
      - 9980
    restart: always
    volumes:
      - "/etc/localtime:/etc/localtime:ro"

networks:
  caddynet:
    external: true

volumes:
  nextcloud_data:   # assumed: declaration for the named volume used by the nextcloud service (not in the original paste)
```
Container logs
Nothing in the logs from the docker container.
Thanks for opening your first issue here! Be sure to follow the relevant issue templates, or risk having this issue marked as invalid.
I'm not sure this is related to a container issue. I would start by getting support from Nextcloud themselves, as that error seems like a client/macOS-specific error.