From the name and description, I expected this to perform operations on file path strings, like convert relative to absolute (and vice versa), expand symlinks, convert unix paths to dos, etc. This is more like a find command.
I don't see why it necessarily couldn't; my only question is whether there are really that many actual use cases for such things. As far as symlinks go, I suppose being able to expand them (but not following them!) might be somewhat useful. But converting to DOS paths and vice-versa? That just doesn't seem very useful. Never mind converting back and forth between relative and absolute paths; I can't even imagine what the point of that would be. But perhaps I'm just not seeing the forest for the trees, as they say.
As a rule of thumb I always make paths absolute when handling files in scripts. But then sometimes I need to copy a directory tree relative to $CWD somewhere else, so I convert them back to relative.
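GNU coreutils' `realpath` handles that round-trip, for example:

```sh
# Relative -> absolute (symlinks resolved along the way):
abs=$(realpath some/dir/file.txt)
# Absolute -> relative to the current directory again:
realpath --relative-to="$PWD" "$abs"
```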
Fish, being a great shell, provides this via the `path` command[0]
[0]: https://fishshell.com/docs/current/cmds/path.html
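For example (subcommand names per the linked docs; fish's `path` operates on path strings rather than searching the filesystem):

```fish
# Tidy up a path string (purely textual, no filesystem lookups):
path normalize foo/../bar
# Make a path absolute, resolving symlinks:
path resolve ./some-symlink
```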
I've found that nushell's `ls` with a where clause is pretty good for this. There's also the `find` command.
What can this do that standard Unix find can not do?
I would say the default behaviour just isn't very ergonomic. Suppressing warnings, for example, requires piping to /dev/null (whereas `path` suppresses permission warnings by default); if you want to limit the number of results, you have to pipe the output to another command, and the same goes for getting xargs-like behaviour (obviously) or putting quotes around lines with embedded spaces. There are simply more hoops to jump through. It's much easier to type "path -sf .jpg .jpeg .png" than whatever would be required to get the `find` utility to do the same. (Or, say, finding all node_modules folders with "path -z n_m", it's just so much more satisfying.) But yes, these are mostly just syntactic-sugar kinds of issues. Aside from that (and perhaps the lack of cross-platform compatibility), I would say there is nothing inherently deficient about the `find` command. It's a workhorse which probably has more features than `path` does. But the latter really is growing on me. It is actually quite fun to use, if I may say so myself!
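To be fair, `find` can do the same, just with more typing; guessing at the flag semantics from context ("match regular files with these suffixes, quietly"), the rough equivalent would be something like:

```sh
# Roughly what "path -sf .jpg .jpeg .png" does, spelled out for find:
find . -type f \( -name '*.jpg' -o -name '*.jpeg' -o -name '*.png' \) 2>/dev/null
```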
“A more ergonomic find command” is a nice elevator pitch.
Cross-platform support, according to the description.
fd exists https://github.com/sharkdp/fd
Looks like it has a pretty good interface as well. It does however seem just a bit too top-heavy (lots of dependencies), not to mention a few more bugs than I particularly care for. But sheesh, 37K stars, it must be good for something!
> ... "it must be good for something!"
It's good for finding files fast, and piping the resulting file paths into other tools for further action / handling. It does what it claims to do and does it well. :)
> for the primary purpose of helping other programs know where to find stuff
Potential footgun to make a program rely on this to locate, say, a shared library (as in one of the examples), if there’s a possibility that someone has smuggled a malware’d version of it into, say, /tmp, since it defaults to searching the root directory.
Kind of, but also kind of not. I mean, if someone can smuggle a file into some random directory, chances are they have enough access to write directly to the "correct" folder to begin with. Personally I wouldn't execute or otherwise load any sort of executable content from a directory not owned by root (although certainly there are many people who wouldn't even think twice before doing such a thing). So it really just boils down to having a sane security policy. Restrict searches with something like "path -d /usr *" and you are guaranteed not to scoop up something that was world-writable in the first place. In fact, that is precisely how the example given in the README would have worked: both /lib32 and /lib64 are owned by "root" and hence not a concern.
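As a sketch of that kind of policy with stock tools (the `libfoo` name is just a placeholder), you could additionally refuse anything world-writable:

```sh
# Search only under /usr and drop any world-writable hits:
find /usr -name 'libfoo*' ! -perm -o+w 2>/dev/null
```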
Naturally every footgun is guaranteed to be safe as long as you use it right :)
I wonder if a safer default would be to start searches at the current directory rather than the root directory?
Well, I ran a bunch of tests and it turns out that the performance wasn't actually impacted very much after all. So the changes are official. I also made some other adjustments to the default behaviour; if no pattern is specified then it just matches everything. In other words, "path -f" prints every regular file in the filesystem, starting from the current directory. Anyway, thanks for the suggestion, otherwise I may never have gone down that (decidedly satisfying) rabbit-hole!
I did actually consider that at one point, but eventually decided against it because I felt it would have meant a sacrifice in performance; first you'd do the local search, then start at the very top and recurse back down, checking every single entry against the local path to be sure that you don't do the local traversal all over again. Fortunately the code base is very clean and straightforward, so it would be a fairly trivial exercise to just fork the repo and make those changes yourself to get that kind of behaviour.
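That said, for anyone who wants to experiment before forking, plain `find` can approximate "local first, then everything else" by pruning the already-searched subtree (with `$pattern` standing in for whatever you're matching):

```sh
# Pass 1: search the local tree first.
find "$PWD" -name "$pattern" 2>/dev/null
# Pass 2: search from the root, pruning the subtree pass 1 already covered.
find / -path "$PWD" -prune -o -name "$pattern" -print 2>/dev/null
```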