My dmesg is already constantly full of
INFO: task btrfs:103945 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Until eventually
Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
So I'm looking forward to getting an actual count of how often this happens without needing to babysit the warning suppressions and count the incidents myself.
You could leave this problem behind by switching to a filesystem that isn't full of deadlock bugs.
A background thread performing blocking I/O is an implementation detail, not a bug. Other filesystems don't have (or need) that sort of bookkeeping, so if a block device stalls badly enough to trigger these warnings, the stall gets attributed to application threads (if at all) rather than to btrfs worker threads. Either way, the stall very much still happens.
I am curious - is this message indicative of a problem in the fs? I would have assumed anything marked "INFO" is, tautologically, not an error, but surely a filesystem shouldn't be locking up? Or is it just suggestive of high system load or poor hardware performance?
In my experience, "hung task" is almost always due to running out of RAM and the scheduler constantly thrashing instead of doing useful work. I rarely actually reach the point of seeing the message since I'll sysrq-kill if early enough, or else hard-reboot.
Note also that modern filesystems do a lot of background work that doesn't strictly need to be done immediately for correctness.
(Of course, it also seems common for people to completely disregard the well-documented "this feature is unreliable, don't use it" warnings that btrfs has, then complain that they have problems without mentioning that they ignored the warnings until everyone is halfway through the complaint thread.)
The only problems I've encountered in all my years of using btrfs are:
* when (all copies of) a file bitrots on disk, you can't read it at all, rather than being able to copy the mostly-correct file and see if you can hand-correct it into something usable
* if you enable new compression algorithms on your btrfs volume, you can't read your data from old kernels (often on liveusb recovery disks)
* fsync is slow. Like, really really slow. And package managers designed for shitty CoW-less filesystems use fsync a lot.
> In my experience, "hung task" is almost always due to running out of RAM
In my case, I don't think this machine ever commits more than around 5GB of its 32GB available memory, so I doubt it's that.
> it also seems common for people to completely disregard the well-documented "this feature is unreliable, don't use it" warnings that btrfs has
Now that I am definitely doing. I won't give up raid6 until it eats all my data for a fourth time.
It could be any of the above, I'd say it's info because the kernel itself is not in an error state, it's information about a process doing something unusual
The in-kernel btrfs code locking up should never happen at all. There is a rumor going around that btrfs never reached maturity and suffers from design issues.
That's why I use ext4 exclusively on linux. Never once had a filesystem issue.
ext4 works fine on my Linux laptop and I agree, it's proven itself over many years to be supremely reliable, though it doesn't compare in features to the more complex filesystems.
On my home media server, however, I'm using ZFS in a RAID array, with regular scrubs and snapshots. ZFS has many features like RAID, scrubs, COW, snapshots, etc. that you just don't get on ext4. However, unlike btrfs, ZFS seems to have a great reputation for reliability with all its features.
Granted it was at least a decade ago but the team I was on had a terrible experience with ZFS and that bad taste still lingers. And I don’t need any of its features.
Given the mailing-list history with Linus, I wouldn't be surprised.
I was planning on it but the filesystem I wanted to switch to keeps getting set back by the author's CoC drama
What counts as a hung task? Blocking on unsatisfiable I/O for more than X seconds? Scheduler hasn’t gotten to it in X seconds?
If a server process is blocking on accept(), wouldn’t it count as hung until a remote client connects? or do only certain operations count?
torvalds/linux/kernel/hung_task.c:
static void check_hung_task(struct task_struct *t, unsigned long timeout) https://github.com/torvalds/linux/blob/9f16d5e6f220661f73b36...
static void check_hung_uninterruptible_tasks(unsigned long timeout) https://github.com/torvalds/linux/blob/9f16d5e6f220661f73b36...
Just to double check my understanding (because being wrong on the internet is perhaps the fastest way to get people to check your work):
Is this saying that two kinds of tasks are counted: regular tasks that haven't been scheduled for two minutes, and uninterruptible tasks (truly uninterruptible, not idle and not killable-despite-being-marked-uninterruptible) that haven't been woken up for two minutes?
Your and the Llama's explanations would make good comments for the source and/or the docs if true.
And there's https://en.wikipedia.org/wiki/Zombie_process too
Not the same thing by any means - they don't indicate something is wrong with kernel or hardware.
The zombie process state is a normal transient state for all exiting processes where the only remaining function of the process is as a container for the exiting process's id and exit status; they go away once the parent process calls some flavor of the "wait" system call to collect the exit status. A pileup of zombies indicates a userspace bug: a negligent parent process that isn't collecting the exit status in a timely manner.
Additionally, there are a few more process accounting things, rusage, that zombie processes hold until reaped. See wait3(2), wait4(2) and getrusage(2).