After running for several days, dunst no longer displays notifications #1186

Open
MichaelSheely opened this issue Jul 20, 2023 · 10 comments

@MichaelSheely

Issue description

The first time this happened, I thought it was a fluke. It has now happened a second time.

After several days (I will get a more accurate estimate the next time it happens), dunst claims to be working, but no notifications are displayed.

A short-term workaround is to manually kill dunst and relaunch.
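For reference, the relaunch is roughly either of the following (from memory; the second form assumes dunst runs as the systemd user service shown further down):

$ pkill -x dunst && dunst &
$ systemctl --user restart dunst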

Installation info

  • Version: 1.9.0 (2022-06-27)
  • Install type: package
  • Window manager / Desktop environment: i3
  • Distro: Debian Testing

Configuration Details

dunstrc: I have not created a custom dunstrc; I am using the built-in default.

Repro (after waiting several days)

After noticing the issue had recurred today, I set about trying to determine the cause.

$ notify-send Test "this is a test"

No stdout or stderr, no popup displayed.

$ dunstify --action="replyAction,reply" "Message received"
1                       

No popup displayed.
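Not something I tried at the time, but one way to confirm the requests at least reach the bus would be to watch the notification interface while re-sending a test:

$ dbus-monitor "interface='org.freedesktop.Notifications'"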

Test with browser notifications:

Permission to display: granted
Notification #1 queued for display
Notification #1 showed

I also tried this in Firefox, with the exact same result.

Debugging Attempts

Confirm dunst is running.

$ ps -eF | grep dunst
msheely     5750    3552 44 178723 47088  1 Jul05 ?        6-13:58:51 /usr/bin/dunst

Try the debug command and ensure dunst is not paused.

$ dunstctl debug
dunst version: 1.9.0
$ dunstctl is-paused
false
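dunstctl also has a count subcommand (per dunstctl(1) in 1.9), so an additional sanity check would be whether notifications are piling up in the queue rather than being displayed:

$ dunstctl count waiting
$ dunstctl count displayed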

Check status.

$ systemctl --user status dunst
● dunst.service - Dunst notification daemon
     Loaded: loaded (/usr/lib/systemd/user/dunst.service; static)
     Active: active (running) since Wed 2023-07-05 13:42:59 PDT; 2 weeks 0 days ago
       Docs: man:dunst(1)
   Main PID: 5750 (dunst)
      Tasks: 3 (limit: 76522)
     Memory: 32.5M
        CPU: 6d 13h 42min 19.855s
     CGroup: /user.slice/user-391236.slice/user@391236.service/app.slice/dunst.service
             └─5750 /usr/bin/dunst

Jul 05 13:42:59 [HOSTNAME] systemd[3552]: Starting dunst.service - Dunst notification daemon...
Jul 05 13:42:59 [HOSTNAME] systemd[3552]: Started dunst.service - Dunst notification daemon.

Check dunst server info.

$ dbus-send --session --dest=org.freedesktop.Notifications --print-reply /org/freedesktop/Notifications org.freedesktop.Notifications.GetServerInformation
method return time=1689874703.084915 sender=:1.60 -> destination=:1.459 serial=787 reply_serial=2
   string "dunst"
   string "knopwob"
   string "1.9.0 (2022-06-27)"
   string "1.2"

I used the following to kill dunst and attempt to relaunch it:

$ dunst # display pid of existing dunst process
CRITICAL: [dbus_cb_name_lost:1044] Cannot acquire 'org.freedesktop.Notifications': Name is acquired by '[]@' with PID '5750'.
$ kill 5750 && dunst  # try kill and relaunch
CRITICAL: [dbus_cb_name_lost:1047] Cannot acquire 'org.freedesktop.Notifications'.
$ # try with systemctl again
$ systemctl --user status dunst
● dunst.service - Dunst notification daemon                                                                                                                                                                                                                                                
     Loaded: loaded (/usr/lib/systemd/user/dunst.service; static)                                                                                                                                                                                                                          
     Active: active (running) since Thu 2023-07-20 12:40:56 PDT; 4s ago                                                                                                                                                                                                                    
       Docs: man:dunst(1)            
   Main PID: 3896948 (dunst)
      Tasks: 4 (limit: 76522)
     Memory: 3.8M
        CPU: 35ms
     CGroup: /user.slice/user-391236.slice/user@391236.service/app.slice/dunst.service
             └─3896948 /usr/bin/dunst

Jul 20 12:40:56 [HOSTNAME] systemd[3552]: Starting dunst.service - Dunst notification daemon...
Jul 20 12:40:56 [HOSTNAME] systemd[3552]: Started dunst.service - Dunst notification daemon.

While the initial attempt to kill and relaunch by hand failed, relaunching with systemctl was successful.

Now all notifications work. I will update with details if I am able to determine which action causes notifications to stop working again after several days.

@ShellCode33
Contributor

ShellCode33 commented Jul 23, 2023

Try to list coredumps to see if dunst crashed:

coredumpctl list

If it did, make sure gdb is installed and set the following environment variable:

export DEBUGINFOD_URLS="https://debuginfod.debian.net"

And run:

coredumpctl debug PID_OF_DUNST_FROM_COREDUMPCTL_OUTPUT

gdb will download all the required symbols from the debuginfod server, and at some point you will be dropped into a gdb shell. Run the backtrace command and post its output here. You should see something like this:

[screenshot: example gdb backtrace output]
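In other words, once dropped into the gdb shell, something along these lines (the exact frames will of course differ):

(gdb) bt
(gdb) bt full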

@MichaelSheely
Author

Thanks for the response!

It took some time to reproduce again, but just today I confirmed notifications are not showing up.

I did not have systemd-coredump installed, but I have installed it now.

It returns

$ coredumpctl list
No coredumps found.

but I'm not familiar with the tool (I could imagine it only works if it was installed at the time of the crash).

I don't think it crashed though, given that it still appears in the list of running processes:

$ ps aux | grep dunst
msheely     5788 29.3  0.0 604552 36124 ?        Ssl  Jul31 4242:41 /usr/bin/dunst

Is there anything else I should try now, or just go ahead and manually kill and relaunch as I did before and check for coredumps again the next time it stops displaying notifications?

@ShellCode33
Contributor

ShellCode33 commented Aug 11, 2023

I could imagine it only works if it was installed at the time of the crash.

Correct

You might also want to check man systemd-coredump; enabling the service might be required if it's not already enabled.
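A quick way to check (assuming a default Debian setup) is to verify that crashes are piped to systemd-coredump and that its socket is active:

$ cat /proc/sys/kernel/core_pattern
$ systemctl status systemd-coredump.socket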

I don't think it crashed though, given that it still appears in the list of running processes

That doesn't mean it didn't crash; dunst is automatically invoked by D-Bus if it's not already running when a notification is received. You might want to check the output of journalctl --user -u dunst to see if anything of interest is there.
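One way to spot a silent restart would be to compare the process start time with the unit's start time, for example:

$ ps -o pid,lstart,cmd -p "$(pgrep -x dunst)"
$ systemctl --user show dunst -p MainPID -p ActiveEnterTimestamp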

@MichaelSheely
Author

MichaelSheely commented Aug 11, 2023

enabling the service might be required

I believe it was automatically enabled when I installed it, based on the output of this command:

$ sudo sysctl -a | grep kernel.core_pattern
kernel.core_pattern = |/lib/systemd/systemd-coredump %P %u %g %s %t 9223372036854775808 %h

I also tested by running kill -s SIGSEGV 297858 against a vim process I had running, after which there was one coredump available:

$ coredumpctl list
TIME                           PID    UID   GID SIG     COREFILE EXE               SIZE
Fri 2023-08-11 08:48:47 PDT 297858 391236 89939 SIGSEGV present  /usr/bin/vim.gtk3 3.8M

dunst is automatically invoked by dbus if it's not already running when a notification is received

Ah, good to know. I have tested with multiple notifications once I notice one that didn't come through, though. (My thinking here is that if the only problem were that dunst crashed and was then restarted by the next notification, the restart itself would fix the issue, and subsequent notifications should be displayed without issue, which doesn't match my observations.) But I suppose something could happen that gets even the relaunched dunst into a bad state; it's curious, though, that manually killing that dunst process and relaunching seems to be a temporary fix.

check the output of journalctl --user -u dunst to see if anything of interest is there

Thanks! You are correct; there are three status=1/FAILURE messages. Here is the full output:

$ journalctl --user -u dunst
Jun 20 20:59:01 [HOSTNAME] dunst[6515]: XIO:  fatal IO error 11 (Resource temporarily unavailable) on X server ":20"
Jun 20 20:59:01 [HOSTNAME] dunst[6515]:       after 8008989874 requests (8008989874 known processed) with 361 events remaining.
Jun 20 20:59:01 [HOSTNAME] systemd[4349]: dunst.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 20:59:01 [HOSTNAME] systemd[4349]: dunst.service: Failed with result 'exit-code'.
Jun 20 20:59:01 [HOSTNAME] systemd[4349]: dunst.service: Consumed 1w 5d 14h 16min 27.419s CPU time.
-- Boot 1a632811a92c4ca182720e427aca23cb --
Jun 21 13:05:17 [HOSTNAME] systemd[4944]: Starting dunst.service - Dunst notification daemon...
Jun 21 13:05:17 [HOSTNAME] systemd[4944]: Started dunst.service - Dunst notification daemon.
Jun 29 13:15:26 [HOSTNAME] systemd[4944]: dunst.service: Consumed 1d 16h 24min 55.850s CPU time.
Jun 29 13:15:58 [HOSTNAME] systemd[4944]: Starting dunst.service - Dunst notification daemon...
Jun 29 13:15:58 [HOSTNAME] systemd[4944]: Started dunst.service - Dunst notification daemon.
Jul 05 13:20:58 [HOSTNAME] dunst[110777]: X connection to :1 broken (explicit kill or server shutdown).
Jul 05 13:20:58 [HOSTNAME] systemd[4944]: dunst.service: Main process exited, code=exited, status=1/FAILURE
Jul 05 13:20:58 [HOSTNAME] systemd[4944]: dunst.service: Failed with result 'exit-code'.
Jul 05 13:20:58 [HOSTNAME] systemd[4944]: dunst.service: Consumed 28min 6.736s CPU time.
-- Boot eb4d3830155d4f73be38b7b0c816c49d --
Jul 05 13:42:59 [HOSTNAME] systemd[3552]: Starting dunst.service - Dunst notification daemon...
Jul 05 13:42:59 [HOSTNAME] systemd[3552]: Started dunst.service - Dunst notification daemon.
Jul 20 12:40:33 [HOSTNAME] systemd[3552]: dunst.service: Consumed 6d 15h 18min 59.742s CPU time.
Jul 20 12:40:56 [HOSTNAME] systemd[3552]: Starting dunst.service - Dunst notification daemon...
Jul 20 12:40:56 [HOSTNAME] systemd[3552]: Started dunst.service - Dunst notification daemon.
Jul 20 15:04:49 [HOSTNAME] dunst[3896948]: WARNING: Unsupported mouse button: '4'
Jul 27 14:50:41 [HOSTNAME] dunst[3896948]: WARNING: Unsupported mouse button: '4'
Jul 31 10:07:31 [HOSTNAME] dunst[3896948]: X connection to :1 broken (explicit kill or server shutdown).
Jul 31 10:07:31 [HOSTNAME] systemd[3552]: dunst.service: Main process exited, code=exited, status=1/FAILURE
Jul 31 10:07:31 [HOSTNAME] systemd[3552]: dunst.service: Failed with result 'exit-code'.
Jul 31 10:07:31 [HOSTNAME] systemd[3552]: dunst.service: Consumed 20min 26.071s CPU time.
-- Boot 685c56b63bc04cc8b2020d48e932783f --
Jul 31 10:10:57 [HOSTNAME] systemd[3578]: Starting dunst.service - Dunst notification daemon...
Jul 31 10:10:57 [HOSTNAME] systemd[3578]: Started dunst.service - Dunst notification daemon.

@MichaelSheely
Author

I'll go ahead and restart the service today if there is nothing further we can glean from this instance. I'll update again when/if I next observe a missed notification, and will upload any applicable core dumps.

@ShellCode33
Contributor

X connection to :1 broken (explicit kill or server shutdown).

and

fatal IO error 11 (Resource temporarily unavailable) on X server ":20"

seem to suggest that you spawned multiple X11 servers. Was that intended behavior? If it was not, I would try to understand why it's happening.
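A rough way to check (just a suggestion, I can't tell from the logs alone) is to list the running display servers and the display sockets that exist:

$ ps -eo pid,lstart,cmd | grep -E '[X]org|[X]wayland'
$ ls /tmp/.X11-unix/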

@fwsmit
Member

fwsmit commented Jan 14, 2024

Did it happen again? If not, I think this issue can be closed. It could be something weird about your setup if there are issues connecting to Xorg.

@MichaelSheely
Author

Yes, I can reliably reproduce this issue given a few days. I am still able to fix it by killing and restarting dunst.

After ShellCode33 suggested multiple X11 servers might be involved, I reached out to the xorg mailing list.

On Tuesday I can check the logs mentioned in this reply again. The other reply suggests that this must be happening locally, but that multiple logins could create multiple X11 servers. There is only one user account on the machine in question, so the only idea I have is that perhaps something strange is going on with the screen lock, resulting in multiple logins -> multiple X11 servers -> problems for dunst until relaunched. This doesn't seem to match the observations though, since entering the password at the lock screen seems to bring back the exact state the session was left in, apart from the passage of time.
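Next time it happens I can also check whether there really are multiple logins/sessions, e.g. with:

$ loginctl list-sessions
$ who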

@fwsmit
Member

fwsmit commented Jan 16, 2024

A workaround would be to set the service to restart when it fails in systemd (Restart=always).
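A minimal sketch of that (the drop-in goes in the standard systemd user override location):

$ systemctl --user edit dunst

and in the drop-in that opens, add:

[Service]
Restart=always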

@bynect
Member

bynect commented Mar 5, 2024

Did you resolve it in the end?
