bit-team/backintime

`qt5_probing.py` makes `xorg.bin` run with high CPU usage, eating RAM

noyannus opened this issue · 120 comments

When backintime runs its qt5_probing.py, xorg.bin consumes a full CPU core (~97..102%). This happens only while the
/usr/bin/python3 /usr/share/backintime/common/qt5_probing.py processes are running; shortly after they are killed, xorg.bin is back to normal.
RAM and swap also fill up. If I don't kill the qt5_probing.py processes in time, the machine becomes unresponsive (and hot, duh).
Maybe related: the qt5_probing.py processes themselves also show high CPU load.

Before killing:

$ top -c
...
PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
2286 root      20   0 1569896 129604  88988 R 100,0 0,397  16:37.40 /usr/bin/Xorg.bin -nolisten tcp -background none -seat seat0 vt2 -auth /run/sddm/xauth_AwPWFy -noreset -displayfd 16
4336 root      39  19  190912  45644  30924 S 13,00 0,140   3:43.98 /usr/bin/python3 /usr/share/backintime/common/qt5_probing.py
4337 root      39  19  190912  45136  30672 S 13,00 0,138   3:44.16 /usr/bin/python3 /usr/share/backintime/common/qt5_probing.py
5468 root      39  19  190912  45260  30668 S 12,67 0,139   0:12.71 /usr/bin/python3 /usr/share/backintime/common/qt5_probing.py
5467 root      39  19  190912  45564  30844 S 12,00 0,140   0:12.43 /usr/bin/python3 /usr/share/backintime/common/qt5_probing.py
2857 me        20   0 4057896 477992 202676 S 6,667 1,464   1:20.04 /usr/bin/plasmashell --no-respawn
4334 root      39  19 1866060 1,764g   4096 S 5,333 5,667   1:20.48 python3 -Es /usr/share/backintime/common/backintime.py backup-job
4335 root      39  19 1866092 1,764g   4096 S 5,333 5,664   1:21.01 python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job
5464 root      39  19  129716 114832   4096 S 5,000 0,352   0:05.32 python3 -Es /usr/share/backintime/common/backintime.py backup-job
5466 root      39  19  129748 114576   4096 S 5,000 0,351   0:05.26 python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job
3095 me        20   0 1464276 126100  97488 S 0,667 0,386   0:03.97 /usr/bin/easyeffects --gapplication-service
  36 root      20   0       0      0      0 S 0,333 0,000   0:01.30 [ksoftirqd/3]
  48 root      20   0       0      0      0 S 0,333 0,000   0:01.15 [ksoftirqd/5]


The processes were killed with:

for pid in $(ps -ef | awk '/\/backintime\/common\/qt5_probing\.py/ {print $2}'); do kill -9 $pid; done
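
A roughly equivalent one-liner using pkill (from the procps package, assuming it is available) avoids parsing ps output:

# -f matches the pattern against the full command line; -9 sends SIGKILL
pkill -9 -f '/backintime/common/qt5_probing\.py'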

About a minute after killing them, xorg.bin is at <1% CPU load.

$ top -c
...
PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
5759 root      39  19  158868 147672   4480 D 47,00 0,452   0:26.96 rsync
 2857 me        20   0 4066220 477992 202676 S 6,333 1,464   1:31.78 plasmashell
5813 root      20   0       0      0      0 I 3,667 0,000   0:00.74 kworker/u16:28-btrfs-endio-meta
5801 root      20   0       0      0      0 I 2,333 0,000   0:00.30 kworker/u16:15-btrfs-endio-meta
5835 root      20   0       0      0      0 I 2,000 0,000   0:00.42 kworker/u16:50-btrfs-endio-meta
5863 root      20   0       0      0      0 I 2,000 0,000   0:00.77 kworker/u16:78-btrfs-endio-meta
5822 root      20   0       0      0      0 I 1,333 0,000   0:00.42 kworker/u16:37-btrfs-endio-meta
2286 root      20   0 1572172 129732  88988 S 0,667 0,397  17:37.14 Xorg.bin
3064 me         9 -11  121496  21132   8832 S 0,667 0,065   0:01.50 pipewire
3095 me        20   0 1464276 129300  97488 S 0,667 0,396   0:05.39 easyeffects
  89 root       0 -20       0      0      0 I 0,333 0,000   0:00.11 kworker/0:1H-kblockd
 200 root       0 -20       0      0      0 I 0,333 0,000   0:00.15 kworker/1:1H-kblockd
 201 root       0 -20       0      0      0 I 0,333 0,000   0:00.07 kworker/4:1H-kblockd
 202 root       0 -20       0      0      0 I 0,333 0,000   0:00.18 kworker/7:1H-kblockd

The backup jobs have finished (one rsync is still active), but earlier tests have shown that they are not the culprits.

This happened with BiT version 1.4.1, both from YaST (SUSE package manager) and directly from GitHub.
Python version is 3.11.6.
Operating System: openSUSE Tumbleweed 20231215
KDE Plasma Version: 5.27.10
KDE Frameworks Version: 5.112.0
Qt Version: 5.15.11
Kernel Version: 6.6.3-1-default (64-bit)
Graphics Platform: X11
Processors: 8 × 11th Gen Intel® Core™ i7-1165G7 @ 2.80GHz

To help us diagnose the problem quickly, please provide the output of the console command backintime --diagnostics.

Wellllll..... Could that have a common cause?

$ backintime --diagnostics
Traceback (most recent call last):
  File "/usr/share/backintime/common/backintime.py", line 1190, in <module>
    startApp()
  File "/usr/share/backintime/common/backintime.py", line 507, in startApp
    args = argParse(None)
           ^^^^^^^^^^^^^^
  File "/usr/share/backintime/common/backintime.py", line 568, in argParse
    args, unknownArgs = mainParser.parse_known_args(args)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/argparse.py", line 1902, in parse_known_args
    namespace, args = self._parse_known_args(args, namespace)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/argparse.py", line 2114, in _parse_known_args
    start_index = consume_optional(start_index)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/argparse.py", line 2054, in consume_optional
    take_action(action, args, option_string)
  File "/usr/lib64/python3.11/argparse.py", line 1978, in take_action
    action(self, namespace, argument_values, option_string)
  File "/usr/share/backintime/common/backintime.py", line 742, in __call__
    diagnostics = collect_diagnostics()
                  ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/backintime/common/diagnostics.py", line 74, in collect_diagnostics
    'OS': _get_os_release()
          ^^^^^^^^^^^^^^^^^
  File "/usr/share/backintime/common/diagnostics.py", line 398, in _get_os_release
    return osrelease['os-release']
           ~~~~~~~~~^^^^^^^^^^^^^^
KeyError: 'os-release'

Thanks for reporting. Might be related to #1580 . Aryoda will reply soon.

The problem with --diagnostics is known and fixed upstream but not released yet. You can ignore it.

@noyannus You are running BiT as root (at least the process owner is root)?
Then definitely related to #1580

I still did not manage to set up a Tumbleweed virtual machine with a working Gnome or KDE DE to debug this (working on it, it requires manual installation).

@noyannus You are running BiT as root (at least the process owner is root)?

Yes.

I still did not manage to set up a Tumbleweed virtual machine with a working Gnome or KDE DE to debug this (working on it, it requires manual installation).

The installer comes as a full DVD and as a netinstall that downloads as needed. If you only tried one of these options, maybe the other works better? (No idea, just speculating.)

I have now set up a VirtualBox VM using these instructions and it went surprisingly smoothly:

https://techviewleo.com/install-opensuse-tumbleweed-on-virtualbox-vmware-workstation/

@noyannus

I have installed a fresh Tumbleweed 20231219 with

  • KDE Plasma 5.27.10 (same as yours)
  • KDE Frameworks Version 5.113.0 (yours is 5.112.0)
  • Qt Version 5.15.11 (same as yours)
  • Kernel Version 6.6.6-1-default (64-bit) (yours was 6.6.3-1-default)
  • Graphics Platform: X11 (same as yours)
  • Python3 version 3.11.6 (same as yours)

and installed BiT v1.4.1 via YaST, but could not reproduce the problem (with BiT as root): when starting a backup via the BiT Qt GUI

  • the systray icon appears
  • the backups finishes
  • and the systray icon disappears again
  • My CPU load is also normal

qt5_probing.py is used to check if a systray icon can be shown.

Could you please provide more details on how to reproduce the problem, e.g.

  • which GPU is installed (may be a Qt5 issue)
  • how do you start the backup (GUI, CLI or scheduled via cron)
  • does it always happen
  • could you provide an anonymized minimal profile config file that reproduces the problem on your machine
  • which user rights and owner are set on the backup target folder and the included folders/files?

The "problem" does not occur when starting from the GUI, as that works fine.
The problem occurs when starting from a cron job generated by BiT.

As ROOT

I am running a cron job as root (created by BiT) without errors now for 30 minutes (executed every 5 minutes).

Any more ideas what could be different in your setup?

BTW: A temporary workaround would be to deactivate the BiT systray icon by "deleting" the systrayiconplugin.py from the plugin folder:

bit@localhost:/usr/share/backintime/plugins> sudo mv systrayiconplugin.py ..

no idea, but will test your sidestep and report. tks

that worked "this time" for me.
removed systrayiconplugin.py

seems there exists no option to control this "plugin" ??
** perhaps a permissions problem ? **

tks, will monitor and report any further failures. tks

seems there exists no option to control this "plugin" ??

Unfortunately not. This is legacy work; the idea was to package plugins separately so they could be installed/uninstalled, but every package I know always delivers the systray icon plugin together with the BiT (Qt5) GUI.

** perhaps a permissions problem ? **

Most probably in Qt5 or the Qt5 wrapper in combination with cron, which provides only a reduced environment where perhaps something is missing (but why only on your computer and not in my VM?). We have an open issue with a segfault which may be related (but almost impossible to diagnose: #1095); this could happen if qt5_probing.py segfaults too and tries to write a core dump which the parent process tries to prevent...

Could you please post the output of

pkexec env DISPLAY=$DISPLAY XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR XAUTHORITY=$XAUTHORITY python3 -c "from PyQt5.QtGui import QIcon; from PyQt5.QtWidgets import QSystemTrayIcon,QApplication; app = QApplication(['']); print('isSystemTrayAvailable: ' + str(QSystemTrayIcon.isSystemTrayAvailable())); print('Theme name: ' + QIcon.themeName()); print('has theme icon <document-save>: ' + str(QIcon.hasThemeIcon('document-save'))); print('themeSearchPaths: ' + str(QIcon.themeSearchPaths())); print('fallbackSearchPaths: ' + str(QIcon.fallbackSearchPaths())); print('fallbackThemeName: ' + str(QIcon.fallbackThemeName()))"

here. I want to check if a non-standard Qt5/KDE theme could cause the trouble for root with cron...
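
For readability, the same probe as a small script (a sketch; the file name is hypothetical, the PyQt5 calls are the same as in the one-liner above, and it should be run with the same pkexec/env wrapper):

# qt_theme_probe.py - readable version of the one-liner probe above (hypothetical file name)
from PyQt5.QtGui import QIcon
from PyQt5.QtWidgets import QApplication, QSystemTrayIcon

app = QApplication([''])  # a QApplication is needed before querying systray/icon theme info
print('isSystemTrayAvailable:', QSystemTrayIcon.isSystemTrayAvailable())
print('Theme name:', QIcon.themeName())
print('has theme icon <document-save>:', QIcon.hasThemeIcon('document-save'))
print('themeSearchPaths:', QIcon.themeSearchPaths())
print('fallbackSearchPaths:', QIcon.fallbackSearchPaths())
print('fallbackThemeName:', QIcon.fallbackThemeName())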

On my VM I get this output:

QStandardPaths: runtime directory '/run/user/1000' is not owned by UID 0, but a directory permissions 0700 owned by UID 1000 GID 1000
isSystemTrayAvailable: True
Theme name: hicolor
has theme icon <document-save>: False
themeSearchPaths: ['/usr/share/icons', ':/icons']
fallbackSearchPaths: ['/usr/share/pixmaps']
fallbackThemeName: hicolor

I only have this problem with root from cron, never from the qt app as root or on any occasion as user. And I do not have it on my server with:
backintime-qt4-1.1.20-3.6.1.noarch
backintime-1.1.20-3.6.1.noarch

# pkexec env DISPLAY=$DISPLAY XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR XAUTHORITY=$XAUTHORITY python3 -c "from PyQt5.QtGui import QIcon; from PyQt5.QtWidgets import QSystemTrayIcon,QApplication; app = QApplication(['']); print('isSystemTrayAvailable: ' + str(QSystemTrayIcon.isSystemTrayAvailable())); print('Theme name: ' + QIcon.themeName()); print('has theme icon <document-save>: ' + str(QIcon.hasThemeIcon('document-save'))); print('themeSearchPaths: ' + str(QIcon.themeSearchPaths())); print('fallbackSearchPaths: ' + str(QIcon.fallbackSearchPaths())); print('fallbackThemeName: ' + str(QIcon.fallbackThemeName()))"
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, xcb.

@aryoda

Could you please provide more details on how to reproduce the problem, e.g.

which GPU is installed (may be a Qt5 issue)

The GPU and other graphics info:

# lshw -c display -sanitize
*-display
description: VGA compatible controller
product: TigerLake-LP GT2 [Iris Xe Graphics]
vendor: Intel Corporation
physical id: 2
bus info: pci@0000:00:02.0
logical name: /dev/fb0
version: 01
width: 64 bits
clock: 33MHz
capabilities: pciexpress msi pm vga_controller bus_master cap_list rom fb
configuration: depth=32 driver=i915 latency=0 mode=1920x1080 resolution=2256,1504 visual=truecolor xres=1920 yres=1080
resources: iomemory:600-5ff iomemory:400-3ff irq:145 memory:605c000000-605cffffff memory:4000000000-400fffffff ioport:3000(size=64) memory:c0000-dffff memory:4010000000-4016ffffff memory:4020000000-40ffffffff

how do you start the backup (GUI, CLI or scheduled via cron)

BiT is started for two backups, via anacron every 1 and 4 hours, set in the GUI.

does it always happen

Returns after every boot.

could you provide an anonymized minimal profile config file that reproduces the problem on your machine

I have appended it below for better readability.

which user rights and owner are set on the backup target folder and the included folders/files?

# find /media/backups/ -maxdepth 5 -ls
  220608      0 drwxrwxrwx   1 root     root           32 Nov 30 16:05 /media/backups/
     256     16 drwxrwxrwx   1 me       me             20 Dez 12 11:30 /media/backups/backup-1
     257      0 drwxrwxrwx   1 root     root           10 Dez 12 11:30 /media/backups/backup-1/backintime
     258      0 drwxrwxrwx   1 root     root            8 Dez 12 11:30 /media/backups/backup-1/backintime/[redacted]
     259      0 drwxr-xr-x   1 root     root            2 Dez 12 11:30 /media/backups/backup-1/backintime/[redacted]/root
     260      0 drwxr-xr-x   1 root     root          254 Dez 22 09:42 /media/backups/backup-1/backintime/[redacted]/root/1
     256     16 drwxrwx---   1 me       me             20 Dez 12 02:14 /media/backups/backup-2
53607347      0 drwxrwxrwx   1 root     root           10 Dez 12 02:14 /media/backups/backup-2/backintime
53607348      0 drwxrwxrwx   1 root     root            8 Dez 12 02:14 /media/backups/backup-2/backintime/[redacted]
53607349      0 drwxr-xr-x   1 root     root            2 Dez 13 02:05 /media/backups/backup-2/backintime/[redacted]/root
53607350      0 drwxr-xr-x   1 root     root          292 Dez 22 09:35 /media/backups/backup-2/backintime/[redacted]/root/2

May also be of interest:

# pkexec env DISPLAY=$DISPLAY XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR XAUTHORITY=$XAUTHORITY ... returns

QStandardPaths: runtime directory '/run/user/1000' is not owned by UID 0, but a directory permissions 0700 owned by UID 1000 GID 1000
isSystemTrayAvailable: True
Theme name: hicolor
has theme icon <document-save>: False
themeSearchPaths: ['/usr/local/share/icons', '/usr/share/icons', ':/icons']
fallbackSearchPaths: ['/usr/share/pixmaps']
fallbackThemeName: hicolor

The only config is /root/.config/backintime/config (no configs in ~/.config/backintime/ or /etc/backintime/)

config.version=6
global.use_flock=false
internal.manual_starts_countdown=-1
profile1.qt.last_path=/
profile1.qt.places.SortColumn=1
profile1.qt.places.SortOrder=0
profile1.qt.settingsdialog.exclude.SortColumn=1
profile1.qt.settingsdialog.exclude.SortOrder=0
profile1.qt.settingsdialog.include.SortColumn=1
profile1.qt.settingsdialog.include.SortOrder=0
profile1.schedule.custom_time=8,12,18,23
profile1.schedule.day=1
profile1.schedule.mode=25
profile1.schedule.repeatedly.period=1
profile1.schedule.repeatedly.unit=10
profile1.schedule.time=0
profile1.schedule.weekday=7
profile1.snapshots.backup_on_restore.enabled=true
profile1.snapshots.bwlimit.enabled=false
profile1.snapshots.bwlimit.value=3000
profile1.snapshots.continue_on_errors=true
profile1.snapshots.copy_links=false
profile1.snapshots.copy_unsafe_links=false
profile1.snapshots.cron.ionice=true
profile1.snapshots.cron.nice=true
profile1.snapshots.cron.redirect_stderr=false
profile1.snapshots.cron.redirect_stdout=true
profile1.snapshots.dont_remove_named_snapshots=true
profile1.snapshots.exclude.1.value=.gvfs
profile1.snapshots.exclude.2.value=.cache/*
profile1.snapshots.exclude.3.value=.thumbnails*
profile1.snapshots.exclude.4.value=.local/share/[Tt]rash*
profile1.snapshots.exclude.5.value=*.backup*
profile1.snapshots.exclude.6.value=*~
profile1.snapshots.exclude.7.value=.dropbox*
profile1.snapshots.exclude.8.value=/proc/*
profile1.snapshots.exclude.9.value=/sys/*
profile1.snapshots.exclude.10.value=/dev/*
profile1.snapshots.exclude.11.value=/run/*
profile1.snapshots.exclude.12.value=/etc/mtab
profile1.snapshots.exclude.13.value=/var/cache/apt/archives/*.deb
profile1.snapshots.exclude.14.value=lost+found/*
profile1.snapshots.exclude.15.value=/tmp/*
profile1.snapshots.exclude.16.value=/var/tmp/*
profile1.snapshots.exclude.17.value=/var/backups/*
profile1.snapshots.exclude.18.value=.Private
profile1.snapshots.exclude.19.value=/home/[redacted]
profile1.snapshots.exclude.20.value=/home/[redacted]
profile1.snapshots.exclude.21.value=* ~
profile1.snapshots.exclude.22.value=* BUP
profile1.snapshots.exclude.23.value=/home/[redacted]
profile1.snapshots.exclude.24.value=/.snapshots
profile1.snapshots.exclude.25.value=/timeshift
profile1.snapshots.exclude.26.value=/*.Trash-0/*
profile1.snapshots.exclude.27.value=/media/*/*
profile1.snapshots.exclude.bysize.enabled=false
profile1.snapshots.exclude.bysize.value=500
profile1.snapshots.exclude.size=27
profile1.snapshots.include.1.type=0
profile1.snapshots.include.1.value=/
profile1.snapshots.include.2.type=0
profile1.snapshots.include.2.value=/home/[redacted]
profile1.snapshots.include.3.type=0
profile1.snapshots.include.3.value=/home/[redacted]
profile1.snapshots.include.4.type=0
profile1.snapshots.include.4.value=/home/[redacted]
profile1.snapshots.include.5.type=0
profile1.snapshots.include.5.value=/home/[redacted]
profile1.snapshots.include.6.type=0
profile1.snapshots.include.6.value=/etc
profile1.snapshots.include.size=6
profile1.snapshots.local.nocache=false
profile1.snapshots.local.password.save=false
profile1.snapshots.local.password.use_cache=true
profile1.snapshots.local_encfs.path=/media/backups/backup-1
profile1.snapshots.log_level=3
profile1.snapshots.min_free_inodes.enabled=true
profile1.snapshots.min_free_inodes.value=5
profile1.snapshots.min_free_space.enabled=true
profile1.snapshots.min_free_space.unit=20
profile1.snapshots.min_free_space.value=100
profile1.snapshots.mode=local
profile1.snapshots.no_on_battery=true
profile1.snapshots.notify.enabled=false
profile1.snapshots.path=/media/backups/backup-1
profile1.snapshots.path.host=frawo
profile1.snapshots.path.profile=1
profile1.snapshots.path.user=root
profile1.snapshots.preserve_acl=true
profile1.snapshots.preserve_xattr=true
profile1.snapshots.remove_old_snapshots.enabled=true
profile1.snapshots.remove_old_snapshots.unit=80
profile1.snapshots.remove_old_snapshots.value=1
profile1.snapshots.rsync_options.enabled=false
profile1.snapshots.rsync_options.value=
profile1.snapshots.smart_remove=true
profile1.snapshots.smart_remove.keep_all=1
profile1.snapshots.smart_remove.keep_one_per_day=7
profile1.snapshots.smart_remove.keep_one_per_month=24
profile1.snapshots.smart_remove.keep_one_per_week=8
profile1.snapshots.smart_remove.run_remote_in_background=false
profile1.snapshots.ssh.check_commands=true
profile1.snapshots.ssh.check_ping=true
profile1.snapshots.ssh.cipher=default
profile1.snapshots.ssh.host=
profile1.snapshots.ssh.ionice=false
profile1.snapshots.ssh.nice=false
profile1.snapshots.ssh.nocache=false
profile1.snapshots.ssh.path=
profile1.snapshots.ssh.port=22
profile1.snapshots.ssh.prefix.enabled=false
profile1.snapshots.ssh.prefix.value=PATH=/opt/bin:/opt/sbin:\$PATH
profile1.snapshots.ssh.private_key_file=
profile1.snapshots.ssh.user=root
profile1.snapshots.take_snapshot_regardless_of_changes=true
profile1.snapshots.use_checksum=false
profile1.snapshots.user_backup.ionice=true
profile2.name=backup2 (4 h)
profile2.qt.last_path=/home/[redacted]
profile2.qt.places.SortColumn=1
profile2.qt.places.SortOrder=0
profile2.qt.settingsdialog.exclude.SortColumn=1
profile2.qt.settingsdialog.exclude.SortOrder=0
profile2.qt.settingsdialog.include.SortColumn=1
profile2.qt.settingsdialog.include.SortOrder=0
profile2.schedule.custom_time=8,12,18,23
profile2.schedule.day=1
profile2.schedule.mode=25
profile2.schedule.repeatedly.period=4
profile2.schedule.repeatedly.unit=10
profile2.schedule.time=0
profile2.schedule.weekday=7
profile2.snapshots.backup_on_restore.enabled=true
profile2.snapshots.bwlimit.enabled=false
profile2.snapshots.bwlimit.value=3000
profile2.snapshots.continue_on_errors=true
profile2.snapshots.copy_links=false
profile2.snapshots.copy_unsafe_links=false
profile2.snapshots.cron.ionice=true
profile2.snapshots.cron.nice=true
profile2.snapshots.cron.redirect_stderr=false
profile2.snapshots.cron.redirect_stdout=true
profile2.snapshots.dont_remove_named_snapshots=true
profile2.snapshots.exclude.1.value=.gvfs
profile2.snapshots.exclude.2.value=.cache/*
profile2.snapshots.exclude.3.value=.thumbnails*
profile2.snapshots.exclude.4.value=.local/share/[Tt]rash*
profile2.snapshots.exclude.5.value=*.backup*
profile2.snapshots.exclude.6.value=*~
profile2.snapshots.exclude.7.value=.dropbox*
profile2.snapshots.exclude.8.value=/proc/*
profile2.snapshots.exclude.9.value=/sys/*
profile2.snapshots.exclude.10.value=/dev/*
profile2.snapshots.exclude.11.value=/run/*
profile2.snapshots.exclude.12.value=/etc/mtab
profile2.snapshots.exclude.13.value=/var/cache/apt/archives/*.deb
profile2.snapshots.exclude.14.value=lost+found/*
profile2.snapshots.exclude.15.value=/tmp/*
profile2.snapshots.exclude.16.value=/var/tmp/*
profile2.snapshots.exclude.17.value=/var/backups/*
profile2.snapshots.exclude.18.value=.Private
profile2.snapshots.exclude.19.value=/home/[redacted]
profile2.snapshots.exclude.20.value=/home/[redacted]
profile2.snapshots.exclude.21.value=* ~
profile2.snapshots.exclude.22.value=* BUP
profile2.snapshots.exclude.23.value=/home/[redacted]
profile2.snapshots.exclude.24.value=/.snapshots
profile2.snapshots.exclude.25.value=/timeshift
profile2.snapshots.exclude.26.value=/*.Trash-0/*
profile2.snapshots.exclude.27.value=/media/*/*
profile2.snapshots.exclude.bysize.enabled=false
profile2.snapshots.exclude.bysize.value=500
profile2.snapshots.exclude.size=27
profile2.snapshots.include.1.type=0
profile2.snapshots.include.1.value=/
profile2.snapshots.include.2.type=0
profile2.snapshots.include.2.value=/home/[redacted]
profile2.snapshots.include.3.type=0
profile2.snapshots.include.3.value=/home/[redacted]
profile2.snapshots.include.4.type=0
profile2.snapshots.include.4.value=/home/[redacted]
profile2.snapshots.include.5.type=0
profile2.snapshots.include.5.value=/home/[redacted]
profile2.snapshots.include.6.type=0
profile2.snapshots.include.6.value=/etc
profile2.snapshots.include.size=6
profile2.snapshots.local.nocache=false
profile2.snapshots.local.password.save=false
profile2.snapshots.local.password.use_cache=true
profile2.snapshots.local_encfs.path=/media/backups/backup-2
profile2.snapshots.log_level=3
profile2.snapshots.min_free_inodes.enabled=true
profile2.snapshots.min_free_inodes.value=5
profile2.snapshots.min_free_space.enabled=true
profile2.snapshots.min_free_space.unit=20
profile2.snapshots.min_free_space.value=100
profile2.snapshots.mode=local
profile2.snapshots.no_on_battery=true
profile2.snapshots.notify.enabled=false
profile2.snapshots.path=/media/backups/backup-2
profile2.snapshots.path.host=frawo
profile2.snapshots.path.profile=2
profile2.snapshots.path.user=root
profile2.snapshots.preserve_acl=true
profile2.snapshots.preserve_xattr=true
profile2.snapshots.remove_old_snapshots.enabled=true
profile2.snapshots.remove_old_snapshots.unit=80
profile2.snapshots.remove_old_snapshots.value=1
profile2.snapshots.rsync_options.enabled=false
profile2.snapshots.rsync_options.value=
profile2.snapshots.smart_remove=true
profile2.snapshots.smart_remove.keep_all=1
profile2.snapshots.smart_remove.keep_one_per_day=7
profile2.snapshots.smart_remove.keep_one_per_month=24
profile2.snapshots.smart_remove.keep_one_per_week=8
profile2.snapshots.smart_remove.run_remote_in_background=false
profile2.snapshots.ssh.check_commands=true
profile2.snapshots.ssh.check_ping=true
profile2.snapshots.ssh.cipher=default
profile2.snapshots.ssh.host=
profile2.snapshots.ssh.ionice=false
profile2.snapshots.ssh.nice=false
profile2.snapshots.ssh.nocache=false
profile2.snapshots.ssh.path=
profile2.snapshots.ssh.port=22
profile2.snapshots.ssh.prefix.enabled=false
profile2.snapshots.ssh.prefix.value=PATH=/opt/bin:/opt/sbin:\$PATH
profile2.snapshots.ssh.private_key_file=
profile2.snapshots.ssh.user=root
profile2.snapshots.take_snapshot_regardless_of_changes=true
profile2.snapshots.use_checksum=false
profile2.snapshots.user_backup.ionice=true
profiles=1:2
profiles.version=1
qt.last_path=/
qt.logview.height=690
qt.logview.width=1444
qt.main_window.files_view.date_width=100
qt.main_window.files_view.name_width=100
qt.main_window.files_view.size_width=100
qt.main_window.files_view.sort.ascending=true
qt.main_window.files_view.sort.column=0
qt.main_window.height=1012
qt.main_window.main_splitter_left_w=211
qt.main_window.main_splitter_right_w=632
qt.main_window.second_splitter_left_w=230
qt.main_window.second_splitter_right_w=383
qt.main_window.width=847
qt.main_window.x=1698
qt.main_window.y=0
qt.show_hidden_files=false

does it always happen

Returns after every boot.

Ah! Does the anacron BiT job hang if it is started after a reboot but before the first user has logged in with a desktop environment like KDE Plasma (establishing an Xorg session)? Anacron starts overdue jobs directly...

This could also explain why the old implementation of checking for systray icon support using xdpyinfo failed (hung), as reported in #1580...

I am trying to test this scenario in my VM...

Notes:

  • In #1580 the user reported the same problem for Wayland, so it is not only related to X11
  • Qt5's internal check for whether a systray icon is allowed uses a (user) DBus call; without a started desktop environment no user DBus session exists, so this may block or retry forever (see the sketch below)...
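
A rough sketch (an assumption, not existing BiT code) of how a probe could first check whether a user desktop session is present at all before touching Qt5 or DBus:

import os

def desktop_session_likely_present():
    """Heuristic: these variables are set by a login/desktop session, not by cron."""
    session_vars = (
        'DBUS_SESSION_BUS_ADDRESS',  # user session bus (needed for the systray DBus check)
        'XDG_CURRENT_DESKTOP',       # set by most desktop environments
        'WAYLAND_DISPLAY',           # Wayland session
    )
    return any(os.environ.get(var) for var in session_vars)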

note that xdpyinfo does not appear to be present when my instance hangs.

I do not know what has changed, but my root instance started by cron is hanging again:

(lines will wrap):
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 30470 11.5 35.6 13240528 13184432 ? RN 07:00 19:48 python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job
root 30471 35.2 0.1 190992 45364 ? SNl 07:00 60:22 _ /usr/bin/python3 /usr/share/backintime/common/qt5_probing.py

Note: maybe because I renamed rather than deleted the systray plugin. I have now deleted them.

I do not know what has changed, but my root instance started by cron is hanging again:
Note: maybe because I renamed rather than deleted the systray plugin. I have now deleted them.

Renaming does not work, only moving it into another folder or deleting it (since all *.py files in the plugin folder are loaded by BiT).

I could reproduce it by booting without logging in while a BiT root cron job starts every 5 minutes (no need to use anacron).

100% CPU and a few dozen hanging processes:

bit@localhost:~> ps ax | grep -i backintime
 1345 ?        SN     2:31 python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job
 1347 ?        S      0:00 /usr/bin/python3 -Es /usr/share/backintime/qt/serviceHelper.py
 1352 ?        RNl    6:13 /usr/bin/python3 /usr/share/backintime/common/qt5_probing.py
 1401 ?        RN     1:59 python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job
 1402 ?        SNl    4:42 /usr/bin/python3 /usr/share/backintime/common/qt5_probing.py
...

THX a lot for your very helpful kill script in your first post ;-)

Now I have to find a way to debug this without having a debugging GUI in my VM and simulating a started cron job...

@aryoda

I could reproduce it by booting without logging in while a BiT root cron job starts every 5 minutes (no need to use anacron).

100% CPU and a few dozen hanging processes:

Strange. I rebooted to CLI, logged in as root and waited for ~15min. The only python processes were a backup-job after ~2min and later another. Seems legit; I have two backup profiles active.

If you want me to provide additional info or do some tests, tell me soon. I plan to change to a LVM setup during the holidays and that may entail a fresh system setup (have not researched the procedure yet).

THX a lot for your very helpful kill script in your first post ;-)

Save it somewhere; the structure is great for anything that needs checking at regular intervals. With a sound alarm (alsabat) I use it to get notified when my Internet is alive again (ping instead of ps, of course).

I've made it into a login script:
edit: corrected an error.

#!/usr/bin/bash

# kills backintime probing processes
# (/usr/share/backintime/common/qt5_probing.py)


wait=5   # wait minutes

while true; do # start loop again after killing processes and waiting time

    for pid in $(ps -ef | awk '/\/backintime\/common\/qt5_probing\.py/ {print $2}'); do
        kill -9 $pid;
    done

    sleep $(($wait * 60))

done

# for pid in $(ps -ef | awk '/\/backintime\/common\/qt5_probing\.py/ {print $2}'); do kill -9 $pid; done

I rebooted to CLI, logged in as root and waited for ~15min. The only python processes were a backup-job after ~2min and later another. Seems legit; I have two backup profiles active.

I can even provoke the problem by switching to a terminal in the login screen after booting (via Ctrl+Alt+F1 or Host-Key+F1 in my VM) and manually starting a backup with backintime --profile-id 1 backup-job.
Then I can see the hanging qt5_probing.py process with ps ax | grep -i backintime...

If you want me to provide additional info or do some tests, tell me soon.
I plan to change to a LVM setup during the holidays and that may entail a fresh system setup

I can reproduce the problem now, and once I fix it, it would be great if you could test it again
(it just depends on who is faster - me with a fix or you with a new system installation.
But no pressure from my side, holidays are the perfect time to do such maintenance work).

I can reproduce the problem now, and once I fix it, it would be great if you could test it again
(it just depends on who is faster - me with a fix or you with a new system installation.
But no pressure from my side, holidays are the perfect time to do such maintenance work).

OK. See on the other side. :-)

Oh-- and have great holidays!

Short update: Via journalctl --since=today | grep -i backintime I can clearly see that the dbus-daemon tries to establish a remote call to the dbus name org.freedesktop.portal.Desktop, triggered by qt5_probing.py:

[screenshot: journalctl output of the dbus-daemon call to org.freedesktop.portal.Desktop]

I have step-wise removed code from qt5_probing.py and discovered that this line of code causes the process to hang (and produces the above dbus output):

app = QApplication([''])

So I have to find a way now to

  • either make this QApplication instantiation work
  • or avoid calling it (requires recognizing that no desktop environment is running)

Edit: Compared to a direct call in a root shell via python3 qt5_probing.py, the call via BiT injects different environment variables:

[screenshot: comparison of the injected environment variables]

The most significant difference is that DISPLAY=:0 is injected, and this may be the reason why Qt5 assumes there is a running desktop environment (even though there is none).

I can see in the BiT code that the X11 env var DISPLAY=:0 is set here (why? it is normally provided by logging in to a DE):

if not os.getenv('DISPLAY', ''):
    os.putenv('DISPLAY', ':0.0')

This code was added 14 years ago:

https://github.com/bit-team/backintime/blame/ba7439c32efe5d26bfc2afb1320bf734703cc3ef/kde4/plugins/kde4plugin.py#L31-L38
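
A more defensive variant (just a sketch, not the fix chosen later in this thread) would only fall back to :0.0 when an X server for display 0 is actually listening:

import os

if not os.getenv('DISPLAY', ''):
    # /tmp/.X11-unix/X0 is the Unix socket of an X server running on display :0
    if os.path.exists('/tmp/.X11-unix/X0'):
        os.putenv('DISPLAY', ':0.0')

Note that this would only avoid the "no X server at all" case; the authorization problem described further below would remain.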

Fresh install from
openSUSE-Tumbleweed-DVD-x86_64-Snapshot20231222-Media.iso (it was
openSUSE-Tumbleweed-DVD-x86_64-Snapshot20231106-Media.iso before). The other main difference is a change from a plain partition layout to LVM, but that's unlikely to play a role here.

Re display:

localhost% echo $DISPLAY
:0

There are still several BiT python processes, but all consume very little CPU if any, and xorg is well behaved, too.

Filtered for xorg|python:

[screenshot: process list filtered for xorg|python]

Looks like they start after/with the first backup jobs, then stop multiplying.

[screenshot: process list over time]

I could dig out relevant files from the backups and do a diff with the current version, if that helps.

Looks like they start after/with the first backup jobs, then stop multiplying.

What surprises me (beyond the known problem with the hanging qt5_probing.py) is that there are ten or more BiT GUIs started (app.py). Did you start the BiT GUI manually or somehow automated?

I could dig out relevant files from the backups and do a diff with the current version, if that helps.

THX, currently I don't need that.

I am currently trying to find a solution since I can reproduce the hanging process...

Did you start the BiT GUI manually or somehow automated?

GUI only manually.


There's something that may be related regarding Tumbleweed's dbus. I spent the whole of yesterday with it, cursing and screaming (read: happily learning). I balked at the idea of introducing non-standard links and looked for better solutions (unsuccessfully).

They have an unusual setup: qdbus is libqt5-qdbus.
You can see in YaST Software (search dbus and qdbus) what is where and under what name.

Some say a symbolic link would be the solution (e.g. ln -s /usr/lib64/qt5/bin/qdbus /usr/bin/qdbus), or changing $PATH:

  • an applet dev in the KDE store:

    NOTE: openSUSE Tumbleweed users for right work need create symlink to qdbus:
    sudo ln -s ../lib64/qt5/bin/qdbus /usr/bin/qdbus

  • guy on reddit with a global idea (but 4yrs ago!)

    I’ve actually created symlinks of many of the Qt5 programs without the -qt5 part to /usr/local/bin so that I don’t have to remember this, like /usr/local/bin/lrelease → /usr/bin/lrelease-qt5. "

  • one on SUSE Forum(2016)

    Search for “qtpaths --binaries-dir” and replace “qtpaths” with “/usr/bin/qtpaths” there.
    I have the latest version 5.8.3 installed, where the line numbers likely changed.
    But again, this and the previous change are only necessary if you want to keep the PATH= in ~/.bashrc. And only one of them would be necessary either.

  • or in Arch Forum likewise

  • and here a dev considers an additional command to mitigate. That may be your solution, too, I hope -- that would be easy. :-)

HTH

One more idea.

qdbus called as root:

localhost:~ # qdbus
Could not connect to D-Bus server: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

qdbus called as user:

localhost% qdbus
:1.1
org.freedesktop.systemd1
:1.10
org.kde.KWin
org.kde.KWin.Effect.WindowView1
org.kde.KWin.ScreenShot2
org.kde.NightColor
org.kde.kappmenuview
:1.12
org.kde.kglobalaccel
:1.122
org.freedesktop.secrets
org.kde.kwalletd5
:1.13
:1.14
ca.desrt.dconf
:1.15
com.canonical.Unity
org.freedesktop.Notifications
... long list of connections ...

If this is a qdbus-caused problem, it would match the observation that only root is affected, as @ptilopteri emphasized.

I have noticed absolutely no problem scheduling and performing backups using the gui as root, only as root with cron. Using the gui and no cron shows no problem.

The only problem is noticed when scheduled with cron by root.

I have spent much time finding out the reason for this problem and will stop now (because I had to debug the X11 client/server communication), but I now know how it happens and how to fix it.

Symptoms

  • Root BiT backup job hangs when started via cron
  • Xorg.bin X11 server process has high CPU usage
  • RAM is filling

How to reproduce

Reboot and log in to a terminal (at the login window press Ctrl + Alt + F1) as a user, without starting a Desktop Environment (Gnome, KDE...)!

You can reproduce the problem with the following commands (simulating a cron job - as in cron, the DISPLAY environment variable is not set!):

su -   # make me root (like cron - the DISPLAY env var is not set)
backintime --profile-id 1 backup-job  # try to start a backup for the main profile

BiT will exit but leave a hanging process:

ps ax | grep -i backintime
24649 ?     Sl 0:01 /usr/bin/python3 /usr/share/backintime/common/qt5_probing.py

Minimal reproducible example

Start the BiT code that causes the problem (still in above root shell without a running desktop environment):

cd /usr/share/backintime/common
env DISPLAY=:0.0 QT_LOGGING_RULES="qt.*=true" QT_FORCE_STDERR_LOGGING=1 python3 qt5_probing.py

What happens then:

  1. Nothing happens ;-) The process does not end and blocks the terminal

  2. Press Ctrl + C (to send a SIGINT interrupt to kill the process - in fact it is not killed completely, but now you can see the output)

  3. Then you will see repeated output in the terminal saying:

    Authorization required, but no authorization protocol specified

  4. Stop with Ctrl+Z (puts the process into the background)

  5. Enter ps ax | grep -i qt5_probing

  6. Kill the hanging process with kill -9 <PID> to enforce the kill since the process is otherwise unstoppable

  7. fg is not required now I guess

Note: The same "Authorization required..." error is shown when you try to run other X11 commands in the root shell, e.g.

env DISPLAY=:0.0 xdpyinfo

which explains why older BiT versions that used xdpyinfo to check for an existing X11 server also hung ;-)

Even

env DISPLAY=:0.0 xhost +local

shows the same error!
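
For context (an assumption about the mechanism, not something verified in this thread): this message is typically printed when an X client connects without a valid X authority cookie, which is why the pkexec probe earlier in this thread explicitly forwards XAUTHORITY. A minimal illustration:

# as root, without the logged-in user's cookie:
env DISPLAY=:0.0 xdpyinfo                                    # -> Authorization required, ...
# forwarding the user's cookie (example path, adjust to your user):
env DISPLAY=:0.0 XAUTHORITY=/home/me/.Xauthority xdpyinfo    # succeeds if the cookie is valid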

Reason for the hanging qt5_probing.py process

I have step-wise removed code from qt5_probing.py and discovered that this line of code causes the process to hang (and produces the above dbus output):

app = QApplication([''])

The QApplication constructor most probably hangs when it tries to establish a connection to the X11 server (XOpenDisplay() of xlib.h?).
This endlessly prints the above "Authorization required..." error message and blocks the main thread (Qt5 GUI code must be executed in the main thread).

It is unclear whether the error message is printed by the X11 server or the Qt5 client code (this would need more time to investigate), but I could not find the error message text in the source code of Qt5 or the xcb plugin for X11, so I assume it is a problem of the X11 server (also because xdpyinfo shows the same error message without containing it in its source code, and the first X11 lib call in its source code is XOpenDisplay()).

Note: In cron jobs the DISPLAY variable is not set at all by default, but
BiT sets the X11 display env var to the default :0.0 if none is set ("hard-coded", trying to use the first display as a "best guess"!):

if not os.getenv('DISPLAY', ''):
    os.putenv('DISPLAY', ':0.0')

I have looked into the Git history to find out why the DISPLAY variable is set in the above code (if not already set)
and found that all (non-cosmetic) changes are quite old (dating back to 2009) and were made to fix this
reported bug:

BiT systray icon causes crash when screen is locked

The code causing this bug is based on PyKDE4 for Python 2.5!

The applied solution came from "PyGTK: How to display a systray icon from a cronjob",
where setting DISPLAY to exactly :0.0 was recommended (if not set).

How to fix this bug

  1. It does NOT work to remove above code that sets the DISPLAY env var when none is set.

    The systray icon would then no longer be shown when BiT is started via a root cron job

  2. Use a client-side "timeout" to kill the qt5_probing.py sub process (if it does not end "in time" it may hang)

    The easiest fix is here:

    backintime/common/tools.py

    Lines 709 to 715 in 1b9e3b3

    try:
        path = os.path.join(backintimePath("common"), "qt5_probing.py")
        cmd = [sys.executable, path]
        with subprocess.Popen(cmd,
                              stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE,
                              universal_newlines=True) as proc:

    Just modify the sub process command line with the timeout command (which is installed by default on almost every distro via the GNU core utilities package coreutils):

    cmd = ["timeout", "30s", sys.executable, path]
    

    timeout exits with status 124 if the timeout was reached and kills the process (see man timeout).

  3. Alternative fix: Apply the sub process execution timeout via Python code

    Code pattern (at the same location as in the previously described fix):

    proc = subprocess.Popen(...)
    try:
        outs, errs = proc.communicate(timeout=15)
    except subprocess.TimeoutExpired:
        proc.kill()
        outs, errs = proc.communicate()
    

    This would be a more robust solution since

    • it does not depend on another external command (timeout) which is another dependency
    • the error handling incl. logging is under full control of BiT
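
Putting this together, a self-contained sketch of the Python-side timeout (the function name, the 30-second default and the None return convention are assumptions, not the actual tools.py implementation):

import subprocess
import sys

def probe_qt5(probe_script, timeout_sec=30):
    """Run the probing script, but never wait longer than timeout_sec (sketch)."""
    proc = subprocess.Popen([sys.executable, probe_script],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            universal_newlines=True)
    try:
        outs, errs = proc.communicate(timeout=timeout_sec)
    except subprocess.TimeoutExpired:
        proc.kill()                      # the probe hung (e.g. unreachable X server)
        outs, errs = proc.communicate()  # reap the killed child process
        return None                      # caller treats None as "no systray support"
    return proc.returncode

# Hypothetical usage:
# exit_code = probe_qt5('/usr/share/backintime/common/qt5_probing.py')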

Update: I have applied the above fix and now backintime (as root) hangs when trying to establish a connection to the user (session) dbus, because root has none (or no rights?):

bus = dbus.SessionBus()

The best explanation for this problem may be this: https://stackoverflow.com/questions/71425861/connecting-to-user-dbus-as-root

This can be reproduced with this MRE:

# Precond: Desktop environment with X11/xorg installed
# Reboot into terminal and log in as user
echo $DBUS_SESSION_BUS_ADDRESS   # e.g. unix:path=/run/user/1000/bus
su -
echo $DBUS_SESSION_BUS_ADDRESS   # is not set in my case
# https://dbus.freedesktop.org/doc/dbus-launch.1.html
# If DBUS_SESSION_BUS_ADDRESS is not set for a process that tries to use D-Bus,
# by default the process will attempt to invoke dbus-launch with the --autolaunch option to start up a new session bus
# or find the existing bus address on the X display or in a file in ~/.dbus/session-bus/
env DISPLAY=:0.0 python3
>>> import dbus
>>> bus = dbus.SessionBus()  # hangs forever !!!
# Note: If the DISPLAY env var is not set the above call throws an exception:
# DBusException: org.freedesktop.DBus.Error.NotSupported: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11

To fix this, prevent dbus session bus connections when BiT runs as root. This is already mentioned in the code, but the implementation first tries to connect to dbus and only then (sic!) checks if BiT is running as root, which may theoretically have worked sometimes:

backintime/common/tools.py

Lines 1648 to 1650 in 1b9e3b3

if isRoot():
    logger.debug("Inhibit Suspend failed because BIT was started as root.")
    return

To fix this, check for root first and return (without trying to connect to the user session dbus).
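
A minimal sketch of the reordered check (the helper name is hypothetical; os.geteuid() == 0 stands in for BiT's isRoot(), and dbus is the same python-dbus module used above):

import os
import dbus

def try_session_bus():
    """Return a session bus connection, or None when running as root (sketch)."""
    if os.geteuid() == 0:
        # root has no user session bus; do not even try to autolaunch one
        return None
    try:
        return dbus.SessionBus()
    except dbus.exceptions.DBusException:
        return None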

Does the bug also happen with Wayland instead of X11?

TODO: Most probably yes (it was at least reported by a user), and since the backintime CLI command uses XWayland, the authorization problem may well be the same...

Next steps

  1. TODO: Do more tests with Wayland and other distros (does the bug occur on different setups?)
  2. TODO: Prepare fix
  3. TODO: Find related issues (xdpyinfo, "no systray icon for root when started via cron"...)
  4. TODO: Search related issues in xorg issues

@noyannus @ptilopteri

It wasn't dbus; the problem occurred even earlier (when trying to open the X11 display).

Could you please test an easy "hotfix" (just one line of code)?

  1. Open the file /usr/share/backintime/common/tools.py as sudo

  2. Comment out this line at about line 700 (in the function is_Qt5_working()) using a leading "# ":
    cmd = [sys.executable, path]
    and insert this line after the above code (with the same indentation = number of leading spaces as before):
    cmd = ["timeout", "30s", sys.executable, path]

    The code must look then like this:

       try:
            path = os.path.join(backintimePath("common"), "qt5_probing.py")
            # cmd = [sys.executable, path]
            cmd = ["timeout", "30s", sys.executable, path]
            with subprocess.Popen(cmd,
                                  stdout=subprocess.PIPE,
                                  stderr=subprocess.PIPE,
                                  universal_newlines=True) as proc:
    
  3. Check if the systray icon is still working (= code fixed correctly) by running a backup via the GUI

reinstalled backintime/backintime-qt
made requested changes to /usr/share/backintime/common/tools.py as

tried to start the gui; it failed with something to the effect of: could not start systray icon

currently waiting for the added cron job test to start

note: no icon appears for me in the systray anyway now or before.

tried to start the gui; it failed with something to the effect of: could not start systray icon

Looks like a syntax error in the code.

I think I will prepare a fixed version and also implement a Python-side timeout (without the timeout command since I have discovered hanging backintime processes now - still debugging this).

note: no icon appears for me in the systray anyway now or before.

Never in the unmodified BiT version (neither GUI nor CLI, neither as user nor as root?)?

Looks like an extra problem...

Holy Moly, that was a textbook example of a rabbit hole! Kudos.

Could you please test an easy "hotfix" (just one line of code)?
...
Check if the systray icon is still working (= code fixed correctly) by running a backup via the GUI

Yes, it works. BiT launched as root, manually started two backups => two BiT icons appear in the systray.

note: no icon appears for me in the systray anyway now or before.

Never in the unmodified BiT version (neither GUI nor CLI, neither as user nor as root?)?

Looks like an extra problem...

no, I don't see it currently, nor previously for quite some time.

note: no icon appears for me in the systray anyway now or before.
no, I don't see it currently, nor previously for quite some time.

If your backintime --version is 1.4.x, could you please provide the output of starting a backup in the console with --debug added, so I can see if and how systray icon support is recognized by Qt5? THX!

backintime --version
backintime 1.4.1

python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job --debug

DEBUG: [common/backintime.py:589 argParse] Arguments: {'debug': True, 'profile_id': 2, 'command': 'backup-job', 'func': <function backupJob at 0x7fb3d9a320c0>} | unknownArgs: []
DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 20853
DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 0
DEBUG: [common/tools.py:2592 BackupJobDaemon.daemonize] decouple from parent environment
DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 20854
DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 0
DEBUG: [common/tools.py:2609 BackupJobDaemon.daemonize] redirect standard file descriptors

Back In Time
Version: 1.4.1

Back In Time comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions; type `backintime --license' for details.

note:
/usr/share/backintime/plugins/systrayiconplugin.py
not present

with "systrayiconplugin.py" present:

python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job --debug
DEBUG: [common/backintime.py:589 argParse] Arguments: {'debug': True, 'profile_id': 2, 'command': 'backup-job', 'func': <function backupJob at 0x7fa07cb9a0c0>} | unknownArgs: []
DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 21198
DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 0
DEBUG: [common/tools.py:2592 BackupJobDaemon.daemonize] decouple from parent environment
DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 21199
DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 0
DEBUG: [common/tools.py:2609 BackupJobDaemon.daemonize] redirect standard file descriptors

Back In Time
Version: 1.4.1

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 21199 0.0 0.0 31880 17060 ? S 11:48 0:00 _ python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job --debug
root 21200 0.2 0.1 188388 44900 ? Sl 11:48 0:00 _ /usr/bin/python3 /usr/share/backintime/common/qt5_probing.py

note: /usr/share/backintime/plugins/systrayiconplugin.py not present

Sorry, I forgot that backup-job spawns a new process, so the debug output is not contained in the same terminal.

To get the full debug logging output could you please start your job with

python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job --debug

and then, after about a minute, query the syslog with:

journalctl --since "2 minutes ago" | grep -i backintime

and send me the relevant part of the log...

PS: If you kill the backintime process you should also remove the left-over lock file:

rm /root/.local/share/backintime/worker*.lock

Otherwise the backup will probably not start (but you will see a message then)

after 65 seconds:

Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24091]: DEBUG: [common/backintime.py:589 argParse] Arguments: {'debug': True, 'profile_id': 2, 'command': 'backup-job', 'func': <function backupJob at 0x7f24cbd9a0c0>} | unknownArgs: []
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24091]: DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 24092
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24092]: DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 0
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24092]: DEBUG: [common/tools.py:2592 BackupJobDaemon.daemonize] decouple from parent environment
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24092]: DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 24093
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 0
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: DEBUG: [common/tools.py:2609 BackupJobDaemon.daemonize] redirect standard file descriptors
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: Main profile(1) :: DEBUG: [common/configfile.py:591 Config.setCurrentProfile] Change current profile: 1=Main profile
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: Main profile(1) :: DEBUG: [common/tools.py:174 initiate_translation] No language code. Use systems current locale.
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: Main profile(1) :: DEBUG: [common/backintime.py:677 getConfig] config file: /root/.config/backintime/config
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: Main profile(1) :: DEBUG: [common/backintime.py:678 getConfig] share path: /root/.local/share/backintime
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: Main profile(1) :: DEBUG: [common/backintime.py:679 getConfig] profiles: 1=Main profile, 5=dent, 2=tumbleweed
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: tumbleweed(2) :: DEBUG: [common/configfile.py:591 Config.setCurrentProfile] Change current profile: 2=tumbleweed
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: tumbleweed(2) :: DEBUG: [common/pluginmanager.py:245 PluginManager.load] Register plugin path /usr/share/backintime/plugins
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: tumbleweed(2) :: DEBUG: [common/pluginmanager.py:262 PluginManager.load] Add plugin usercallbackplugin.py

@ptilopteri Could you please query the journalctl log again? I think it may have been too early (the pluginmanager should load more plugins, including the systray plugin: Add plugin systrayiconplugin.py). In my case it looks like this:

Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/backintime.py:679 getConfig] profiles: 1=Main profile, 2=profile 2, 3=test2config, 4=blue_usb (ext) test profile, 5=ssh-hello-world, 6=issue 851 (cross-device symlinks), 7=encFS test
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/configfile.py:591 Config.setCurrentProfile] Change current profile: 1=Main profile
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/pluginmanager.py:245 PluginManager.load] Register plugin path /usr/share/backintime/plugins
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/tools.py:719 is_Qt5_working] Qt5 probing result: exit code 2
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [plugins/systrayiconplugin.py:76 init] System tray is available to show the BiT system tray icon
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/pluginmanager.py:262 PluginManager.load] Add plugin systrayiconplugin.py
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/pluginmanager.py:262 PluginManager.load] Add plugin usercallbackplugin.py
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/pluginmanager.py:262 PluginManager.load] Add plugin notifyplugin.py
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: INFO: [common/snapshots.py:729 Snapshots.backup] Lock
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/tools.py:1623 inhibitSuspend] Inhibit S

Or is it not installed? ls /usr/share/backintime/plugins/ should show three files (one of them being systrayiconplugin.py)

l /usr/share/backintime/plugins/
total 32
drwxr-xr-x 3 root root 4096 Dec 29 11:47 ./
drwxr-xr-x 6 root root 4096 Nov 13 16:42 ../
drwxr-xr-x 2 root root 4096 Dec 22 10:04 __pycache__/
-rw-r--r-- 1 root root 1925 Nov 13 16:42 notifyplugin.py
-rw-r--r-- 1 root root 4263 Nov 13 16:42 systrayiconplugin.py
-rw-r--r-- 1 root root 4647 Nov 13 16:42 usercallbackplugin.py

after 1min 55 sec:

journalctl --since "2 minutes ago" | grep -i backintime
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26040]: DEBUG: [common/backintime.py:589 argParse] Arguments: {'debug': True, 'profile_id': 2, 'command': 'backup-job', 'func': <function backupJob at 0x7fae701de0c0>} | unknownArgs: []
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26040]: DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 26041
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26041]: DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 0
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26041]: DEBUG: [common/tools.py:2592 BackupJobDaemon.daemonize] decouple from parent environment
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26041]: DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 26042
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 0
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: DEBUG: [common/tools.py:2609 BackupJobDaemon.daemonize] redirect standard file descriptors
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: Main profile(1) :: DEBUG: [common/configfile.py:591 Config.setCurrentProfile] Change current profile: 1=Main profile
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: Main profile(1) :: DEBUG: [common/tools.py:174 initiate_translation] No language code. Use systems current locale.
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: Main profile(1) :: DEBUG: [common/backintime.py:677 getConfig] config file: /root/.config/backintime/config
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: Main profile(1) :: DEBUG: [common/backintime.py:678 getConfig] share path: /root/.local/share/backintime
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: Main profile(1) :: DEBUG: [common/backintime.py:679 getConfig] profiles: 1=Main profile, 5=dent, 2=tumbleweed
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: tumbleweed(2) :: DEBUG: [common/configfile.py:591 Config.setCurrentProfile] Change current profile: 2=tumbleweed
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: tumbleweed(2) :: DEBUG: [common/pluginmanager.py:245 PluginManager.load] Register plugin path /usr/share/backintime/plugins
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: tumbleweed(2) :: DEBUG: [common/pluginmanager.py:262 PluginManager.load] Add plugin usercallbackplugin.py

@ptilopteri Thanks for your patience :-)

Strange. Perhaps the grep -i backintime is too restrictive (does journalctl | grep -i systrayiconplugin find anything?).
I will look into the source code to find out under which conditions the systray plugin does not appear in the log...

Edit: The code does not help to find the reason why two plugins are not loaded (too many nested ifs without logging output)... I have added more logging to the code for the next release...

python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job --debug
DEBUG: [common/backintime.py:589 argParse] Arguments: {'debug': True, 'profile_id': 2, 'command': 'backup-job', 'func': <function backupJob at 0x7f8ee60c60c0>} | unknownArgs: []
DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 31338
DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 0
DEBUG: [common/tools.py:2592 BackupJobDaemon.daemonize] decouple from parent environment
DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 31339
DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 0
DEBUG: [common/tools.py:2609 BackupJobDaemon.daemonize] redirect standard file descriptors

Back In Time
Version: 1.4.1

journalctl --since "5 minutes ago" | grep -i backintime
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31337]: DEBUG: [common/backintime.py:589 argParse] Arguments: {'debug': True, 'profile_id': 2, 'command': 'backup-job', 'func': <function backupJob at 0x7f8ee60c60c0>} | unknownArgs: []
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31337]: DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 31338
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31338]: DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 0
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31338]: DEBUG: [common/tools.py:2592 BackupJobDaemon.daemonize] decouple from parent environment
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31338]: DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 31339
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 0
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: DEBUG: [common/tools.py:2609 BackupJobDaemon.daemonize] redirect standard file descriptors
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: Main profile(1) :: DEBUG: [common/configfile.py:591 Config.setCurrentProfile] Change current profile: 1=Main profile
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: Main profile(1) :: DEBUG: [common/tools.py:174 initiate_translation] No language code. Use systems current locale.
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: Main profile(1) :: DEBUG: [common/backintime.py:677 getConfig] config file: /root/.config/backintime/config
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: Main profile(1) :: DEBUG: [common/backintime.py:678 getConfig] share path: /root/.local/share/backintime
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: Main profile(1) :: DEBUG: [common/backintime.py:679 getConfig] profiles: 1=Main profile, 5=dent, 2=tumbleweed
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: tumbleweed(2) :: DEBUG: [common/configfile.py:591 Config.setCurrentProfile] Change current profile: 2=tumbleweed
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: tumbleweed(2) :: DEBUG: [common/pluginmanager.py:245 PluginManager.load] Register plugin path /usr/share/backintime/plugins
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: tumbleweed(2) :: DEBUG: [common/pluginmanager.py:262 PluginManager.load] Add plugin usercallbackplugin.py

from a previous (last month) invocation:

Nov 17 08:09:02 crash.wahoo.no-ip.org backintime[19194]: tumbleweed(2) :: ERROR: Failed to load plugin systrayiconplugin.py: 0

What is the output of this snippet (it should be True, otherwise the file is corrupt)?

cd /usr/share/backintime/common
python3 -c "import sys; sys.path.insert(0, '/usr/share/backintime/plugins/'); import systrayiconplugin; p = systrayiconplugin.SysTrayIconPlugin(); import snapshots; s = snapshots.Snapshots(); print(p.init(s))"

FYI: I have pushed a draft fix for the hanging processes (will test it in different scenarios over the next few days):
aryoda@b84568c
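
The general idea behind such a fix can be illustrated with a timeout-guarded probe. This sketch is only an illustration of the approach, not the code from the linked commit, and probe_qt_gui is a hypothetical helper name:

import subprocess

def probe_qt_gui(probe_script="/usr/share/backintime/common/qt5_probing.py",
                 timeout_sec=5):
    """Return True if a Qt GUI session is reachable, False if the probe hangs or fails."""
    try:
        result = subprocess.run(["python3", probe_script],
                                timeout=timeout_sec,   # kill the probe instead of waiting forever
                                capture_output=True)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        # e.g. root cron job with nobody logged in: treat as "no GUI" instead of blocking
        return False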

output of snippet is "True"

Mind if I unsubscribe now? I feel there is little I can add at this point, and it's cluttering my inbox.

A good and successful 2024 to you all.

Mind if I unsubscribe now? I feel there is little I can add at this point, and it's cluttering my inbox.

Please let me know if you find out how to do it. In my experience unsubscribing won't have an effect because you opened the issue. Microsoft GitHub will bother you with your own issue no matter if you are subscribed or not.

A good and successful 2024 to you all.

To you, too. Thanks for reporting.

The 'unsubscribe' link in the email led to this reaction:

[screenshot of GitHub's unsubscribe confirmation]

We shall see if they keep their word.

Thanks for reporting.

The gratitude is on my side -- it is people like you who provide quality software that everybody can afford.

output of snippet is "True"

OK, I have no clue and need the improved logging output of my upcoming PR.
I suggest waiting until my bug fix is merged, testing it with that version, and then opening a new issue if the systray icon still does not work. That way we keep the issues separated.

I don't know why, but after reinstalling backintime and backintime-qt and removing systrayiconplugin.py, the timed start of backintime via cron no longer works. The instance is there (it has a PID) but just sits idle. The GUI still works fine.

I don't know why, but after reinstalling backintime and backintime-qt and removing systrayiconplugin.py, the timed start of backintime via cron no longer works. The instance is there (it has a PID) but just sits idle. The GUI still works fine.

After fixing the systray icon bug I discovered that the subsequent user-session DBus call of BiT also blocked for root when no user is logged in. I have added a description of this "shadowed" bug to the "how to fix this bug" section in my above bug analysis.

Why it sometimes works and sometimes doesn't is still unclear to me; more tests are on my TODO list (esp. whether the same happens on other distros).

You could try replacing the installed /usr/share/backintime/common/tools.py (rename it to *.bak) with my fixed file, which you can download here (... > View File > Download raw file):

aryoda@b84568c#diff-aefcaf4a21864b54ca4e7ccb0febb72a9400694db77381e505cc23433f1bb729

If this doesn't work please

  • add --debug to the BiT cron job(s) (via sudo crontab -e)
  • reboot
  • wait for the cron job to start
  • log in
  • and send me the journalctl --since "30 minutes ago" | grep -i backintime
    here so that I can check the output.

Edit: I forgot to ask: From which source did you re-install BiT? The openSUSE repo, the source code here, or my cloned repo with the fixed code?

I don't know why, but after reinstalling backintime and backintime-qt and removing systrayiconplugin.py, the timed start of backintime via cron no longer works. The instance is there (it has a PID) but just sits idle. The GUI still works fine.

After fixing the systray icon bug I discovered that the subsequent user-session DBus call of BiT also blocked for root when no user is logged in. I have added a description of this "shadowed" bug to the "how to fix this bug" section in my above bug analysis.

Why it sometimes works and sometimes doesn't is still unclear to me; more tests are on my TODO list (esp. whether the same happens on other distros).

You could try replacing the installed /usr/share/backintime/common/tools.py (rename it to *.bak) with my fixed file, which you can download here (... > View File > Download raw file):

aryoda@b84568c#diff-aefcaf4a21864b54ca4e7ccb0febb72a9400694db77381e505cc23433f1bb729

replaced "tools.py" with your code and set cron job which successfully started.
waiting for it to finish.

completed successfully.

tks,

I am unable to open a new issue and this one was prematurely closed. So here is the problem and the relevant information:

To help us diagnose the problem quickly, please provide the output of the console command backintime --diagnostics.

# backintime --diagnostics
Traceback (most recent call last):
  File "/usr/share/backintime/common/backintime.py", line 1190, in <module>
    startApp()
  File "/usr/share/backintime/common/backintime.py", line 507, in startApp
    args = argParse(None)
           ^^^^^^^^^^^^^^
  File "/usr/share/backintime/common/backintime.py", line 568, in argParse
    args, unknownArgs = mainParser.parse_known_args(args)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/argparse.py", line 1902, in parse_known_args
    namespace, args = self._parse_known_args(args, namespace)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/argparse.py", line 2114, in _parse_known_args
    start_index = consume_optional(start_index)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/argparse.py", line 2054, in consume_optional
    take_action(action, args, option_string)
  File "/usr/lib64/python3.11/argparse.py", line 1978, in take_action
    action(self, namespace, argument_values, option_string)
  File "/usr/share/backintime/common/backintime.py", line 742, in __call__
    diagnostics = collect_diagnostics()
                  ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/share/backintime/common/diagnostics.py", line 74, in collect_diagnostics
    'OS': _get_os_release()
          ^^^^^^^^^^^^^^^^^
  File "/usr/share/backintime/common/diagnostics.py", line 398, in _get_os_release
    return osrelease['os-release']
           ~~~~~~~~~^^^^^^^^^^^^^^
KeyError: 'os-release'

"os-release" is :

NAME="openSUSE Tumbleweed"
# VERSION="20240117"
ID="opensuse-tumbleweed"
ID_LIKE="opensuse suse"
VERSION_ID="20240117"
PRETTY_NAME="openSUSE Tumbleweed"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:tumbleweed:20240117"
BUG_REPORT_URL="https://bugzilla.opensuse.org"
SUPPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org"
DOCUMENTATION_URL="https://en.opensuse.org/Portal:Tumbleweed"
LOGO="distributor-logo-Tumbleweed"

Additionally, please specify as precisely as you can the package or installation source where you got Back In Time from. Sometimes there are multiple alternatives, like for Arch-based distros.

http://cdn.opensuse.org/tumbleweed/repo/oss

Additionally, please specify as precisely as you can the package or installation source where you got Back In Time from. Sometimes there are multiple alternatives, like for Arch-based distros.

 rpm -qi backintime
Name        : backintime
Version     : 1.4.1
Release     : 2.1
Architecture: noarch
Install Date: Fri 19 Jan 2024 09:11:03 AM EST
Group       : Productivity/Archiving/Backup
Size        : 809796
License     : GPL-2.0-or-later
Signature   : RSA/SHA512, Wed 17 Jan 2024 04:24:42 PM EST, Key ID 35a2f86e29b700a4
Source RPM  : backintime-1.4.1-2.1.src.rpm
Build Date  : Wed 17 Jan 2024 04:24:21 PM EST
Build Host  : i02-ch2d
Packager    : https://bugs.opensuse.org
Vendor      : openSUSE
URL         : https://github.com/bit-team/backintime
Summary     : Backup tool for Linux inspired by the "flyback project"

With "/usr/share/backintime/plugins/pycache/systrayiconplugin.cpython-311.pyc" present, backintime started as root from cron fails as it fails to start the systrayiconplugin.

This was said to be solved in #1592 and marked closed on the assumption that the next backintime release would solve the problem. It has not.

But removing "/usr/share/backintime/plugins/__pycache__/systrayiconplugin.cpython-311.pyc" again allows backintime to function from cron as root, and the icon does appear in the systray.
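
If the root cause is only a stale or corrupt cache file, force-recompiling the plugin directory would be an alternative to deleting the .pyc by hand. A hedged sketch (run as root), not an officially recommended procedure:

import compileall

# Rebuild all bytecode caches below the plugins folder, overwriting stale .pyc files.
compileall.compile_dir("/usr/share/backintime/plugins", force=True, quiet=1)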

I am unable to add a title; I cannot type within the box.

please reopen or start a new issue

I am unable to open a new issue

That interests me. Why? What have you tried and what are the errors? Did Microsoft ban you somehow?

backintime --diagnostics

Traceback (most recent call last):
  File "/usr/share/backintime/common/diagnostics.py", line 398, in _get_os_release
    return osrelease['os-release']
           ~~~~~~~~~^^^^^^^^^^^^^^
KeyError: 'os-release'

This indicates to me that you are not using the latest upstream (developer) version but only the latest release.
That "os-release" error is fixed but not yet released.
The same seems to be true for the qt-probing fix: it is fixed but not yet released.
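
For context, a defensive lookup along these lines avoids that KeyError entirely; this is just a sketch, not the actual code of the fix in the dev branch:

import platform

def get_os_description():
    """Return a readable OS description without assuming any key exists."""
    try:
        # Parses /etc/os-release (or /usr/lib/os-release); available since Python 3.10.
        info = platform.freedesktop_os_release()
        return info.get("PRETTY_NAME", info.get("NAME", "unknown"))
    except OSError:
        # Neither os-release file exists.
        return platform.platform()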

Please try the latest development version (instructions) installed from this Microsoft GitHub repo. Or please wait until 1.4.2 is released.

@ptilopteri I am not sure if you still have the issue with the fixed version of BiT (it must currently be built and installed yourself from our dev branch until we provide a new release). The distro packages are definitely not fixed until we provide our release (scheduled for the end of January).

If you are still having problems with the dev version I will re-open the issue (I assumed it was "fixed" after testing and closed it prematurely, I have to admit)...

@ptilopteri I have re-opened the issue.

Could you please

  1. provide me the output of backintime --diagnostics or at least backintime --version
  2. add --debug to the BiT profile used for the root cron job (in Manage profiles > Expert Options > Paste additional options to rsync) Use sudo crontab -e and add --debug to the backintime job entries (move the cursor after backintime and insert --debug - this is vi, so press i to enter edit mode; after editing press ESC then :wq to save the changes...)
  3. wait for the execution of the root cron job until qt5_probing.py hangs and show me the output of sudo journalctl --since "YYYY-MM-DD hh:mm" | grep -i backintime with YYYY-MM-DD hh:mm set to the date and time when cron started the scheduled job.
  4. give me some background (mainly: Was a user logged in when the cron job started; any special setup? ...)
openSUSE Tumbleweed

backintime --diagnostics

# backintime --diagnostics
Traceback (most recent call last):
  File "/usr/share/backintime/common/backintime.py", line 1190, in <module>
    startApp()
  File "/usr/share/backintime/common/backintime.py", line 507, in startApp
    args = argParse(None)
           ^^^^^^^^^^^^^^
  File "/usr/share/backintime/common/backintime.py", line 568, in argParse
    args, unknownArgs = mainParser.parse_known_args(args)

It looks like your installation does not contain the fix(es).

Can you please double check that you have installed our most-recent dev version from GitHub correctly, since this --diagnostics bug was already fixed one month ago?

An indicative check you could use is backintime --version, which should show 1.4.2-dev (unless you already installed the dev version a few weeks ago - then the version number is the same but it is not the most-recent dev version; please reinstall then using these instructions and don't forget to update your clone with git pull).

Looks like I used the wrong instance for "--diagnostics".

Does the cron job hang in qt5_probing.py on this instance with the most-recent BiT dev version installed (if so, my fix would not always work)?

The second instance started correctly and displayed the systray icon. It did not hang and completed successfully. Thanks, the dev version appears to work correctly for me and root.

Happy to hear that!

maybe re-close this issue :)

I will wait a few days to be absolutely sure.

The hanging cron jobs normally appeared only when no user is logged into a desktop environment and the cron job is running as root (best tested by rebooting and not logging into the DE).

Internal note as amendment to #1592 (comment):

Before closing this issue, check whether the approach described in "connecting to user dbus as root" also fixes the problem and explains the reason for the missing authorization:

Use BusConnection instead of SessionBus and specify the address explicitly, combined with seteuid to gain access.
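
A minimal sketch of that idea, purely for illustration (the uid 1000 and the /run/user/1000/bus socket path are assumptions, and this is not BiT's actual code):

import os
from dbus.bus import BusConnection   # dbus-python

def connect_to_user_session_bus(uid=1000):
    """Connect as root to the session bus of the given (assumed) user; must run as root."""
    address = "unix:path=/run/user/{}/bus".format(uid)
    os.seteuid(uid)            # the DBus daemon authorizes the connecting euid
    try:
        return BusConnection(address)
    finally:
        os.seteuid(0)          # switch back to root for the rest of the backup job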

Unfortunately, the root cron job hung again this morning. Killing qt5_probing.py allowed it to continue, and it completed successfully.

Can you follow the steps 2 - 4 in #1592 (comment) to help me diagnose this?

Step 1 (diagnostics) is not required, but please run backintime --version to make sure that your installed dev version was not overwritten by an older version in an overnight "update" of distro packages...

qt5_probing.py appears to hang immediately; journalctl provides no output.

journalctl as described in step 3 of #1592 (comment) must provide some output if you have added --debug (see step 2 in the linked comment) before the root cron job hangs
and the --since date and time are correct (= right before the execution time of the cron job that hangs).
Could you please check this again?

Just to be sure: you let BiT add the cron job and did not add it by hand? Otherwise this may be a different situation...

I do see that an instance (or several) of this appears to start with the root cron job:
/usr/bin/dbus-launch.x11 --autolaunch=3384cbc37a574dcc99445aa25cd2db04 --binary-syntax --close-stderr
which creates a load related to X.

Excellent observation, THX!

From https://stackoverflow.com/questions/71425861/connecting-to-user-dbus-as-root I know this is a permission issue and might be solved via setuid to 1000 (the first user), but that is quite a hard-coded assumption that might not always work either.

I tend to completely disable the systray icon for root cron jobs (but keep it in root's BiT GUI), since showing a systray icon from a root cron job by "hijacking" a user's DBus and X11 session is not reliable (it is a code legacy in BiT)...

I am absolutely sure there was no output, and the date/time was the time BiT was initiated. On the ensuing test I will start the journalctl search one minute prior to the cron job start time.

Perhaps this is a time zone issue...

Could you please give sudo journalctl -b SYSLOG_IDENTIFIER=backintime a try after the cron job blocked and you have killed qt5_probing.py...?

Meanwhile I am trying to configure my OpenSuse VM to reproduce this

The process did not continue; rsync faulted: notify-send Back In Time (root) : tumbleweed 'rsync' ended with exit code 1: See 'man rsync' for more details

Please forgive me, I was totally stupid with my instruction to append --debug, which caused an rsync error. Please do this instead:

  1. Use sudo crontab -e and add --debug to the backintime job entries (move the cursor after backintime and insert --debug - this is vi, so press i to enter edit mode; after editing press ESC then :wq to save the changes...)

and remove the --debug from Manage profiles > Expert Options > Paste additional options to rsync

@ptilopteri No need for actions on your side anymore - I can reproduce the hanging qt5_probing.py in my VM when I do not log in at all after rebooting... Please give me some time to debug this...

@ptilopteri I have attached a patch to check if the user rights are in fact the problem on your setup (in my VM it did fix the hanging qt5_probing.py).

1592_qt5probing.txt

You can apply the patch, from the folder where you downloaded the patch file, via:

sudo patch /usr/share/backintime/common/qt5_probing.py < 1592_qt5probing.txt

Note: GitHub does not allow uploading *.patch files, so it is a *.txt file ;-)

Could you please check if this fixes the problem (and does not cause others ;-) ? Thanks a lot!

PS: Please note that this will most probably not be the final solution since it assumes that user id 1000 has an X11 display session; I just want to test if it works at all...

Applied the patch (txt) file and ran the root cron job. qt5_probing.py still hangs with the accompanying dbus-launch.x11. Killing both instances allows the root cron job to complete.

hm, either the patch does not work or it could not be applied completely...

Successful patching shows something like this on my machine:

> sudo patch /usr/share/backintime/common/qt5_probing.py < 1592_qt5probing.txt
patching file /usr/share/backintime/common/qt5_probing.py
Hunk #1 succeeded at 11 with fuzz 2 (offset 6 lines).
Hunk #2 succeeded at 114 with fuzz 2 (offset 22 lines).

I just saw in your logs that BiT seems to be installed in a different folder (not our default). I did not mention this, but did you change the target folder of the patch to the location of the installed qt5_probing.py? It should then look similar to this:

sudo patch //data/build/backintime_repos/usr/bin/backintime < 1592_qt5probing.txt

BTW: To undo the patch just run it again and answer the question with Yes:

> sudo patch /usr/share/backintime/common/qt5_probing.py < 1592_qt5probing.txt
patching file /usr/share/backintime/common/qt5_probing.py
Reversed (or previously applied) patch detected!  Assume -R? [n] 
Apply anyway? [n] y
Hunk #1 succeeded at 7 with fuzz 2 (offset 2 lines).
Hunk #2 succeeded at 94 (offset 2 lines).

but I have not seen a tray icon.

Not even when starting a backup from within root's BiT GUI?

@ptilopteri

No, I'm running a backup now from the GUI and there is no tray icon.

OK, I have tried to reproduce this in my VM with KDE Plasma, but I can see the BiT systray icon while taking a snapshot...

  1. Which Desktop Environment are you using (Gnome, Xfce...?), any special window manager?

  2. Could you please uninstall the patch and send me the output of

    backintime-qt_polkit --debug

    from a console while logged in as a user in your desktop environment
    and after starting one backup (take a snapshot)?

    Perhaps I can find a clue in the logs...

    You can remove or replace sensitive data in the log output...

I will provide the requested information but need an occasion to adjust my system to use 1.4.2-dev.

THX a lot in advance!

fwiw: I am more interested in the backup performing correctly than seeing
a system tray icon.

I fully understand this, and we have already tried to pin down this problem with many Q&As.
So I propose that after you provide the above debug log we put this issue "on hold" so that you don't have to deliver more answers, since I guess there is not much more you can do. I then have to find a setup to reproduce this myself.
THX a lot for keeping your patience with me ;-)

I am already thinking of giving up launching the systray icons for root cron jobs since we misuse X11/Wayland and DBus here by hijacking user sessions as root. The future may possibly see just a user systray icon that can be configured to "listen" to root BiT cron jobs too...

openSUSE Tumbleweed has no directory "/usr/share/qt" but does have
"/usr/share/qt5" and "/usr/share/qt6".

Yes, same on my Tumbleweed VM. I have just checked our Makefiles, but these folders are not used.
All files go into /usr/share/backintime/...

Well, I copied them directly to /usr/share/qt. If BiT cannot find them there, tell me where to put them. Shame BiT cannot be configured for "make install" and "make uninstall"; that would make troubleshooting much easier and cleaner.

Ah, I didn't mention this but you can build, install and even uninstall from the source following these instructions:

https://github.com/bit-team/backintime/blob/dev/CONTRIBUTING.md#build-and-install-via-make-system-recommended

But it requires installing some more dev-related packages, also documented in the above link...
Hint: If you install the BiT distro packages first and then overwrite them with the dev version, most of the dependencies are already fulfilled.

This is the safest way to install BiT (CLI and GUI).

The target installation folder is then by default /usr/share/backintime