Thanks for all the feedback! Let me clarify a few things about lla.
The most amazing part of this project wasn't just building another ls alternative - it was the incredible learning journey. Building a systems tool in Rust while implementing a plugin architecture taught me more in a few weeks than months of reading could have.
Yes, it does more than traditional ls, and that's intentional. The plugin system came from scratching my own itch of constantly switching between different terminal tools. Each feature added was a chance to dive deeper into systems programming and Unix internals.
The performance still needs work, and the documentation could be better. But that's the beauty of open source - you ship it, learn from the feedback, and keep improving. Building in public is an incredible way to level up your skills.
For anyone considering a similar project: pick a common tool you use daily and try reimagining it. You'll be surprised how much you learn along the way.
Thank you for being one of the few projects replacing a POSIX tool that properly sets the expectation that it's for personal use. It causes me no end of consternation to see so many tools introduced that provide only the barest minimum of functionality: they skip extended attributes and ACLs, fail to keep compatibility with flags, or don't properly separate STDOUT and STDERR.
While these may be sufficient for a naive developer, such oversights then break many downstream tools.
Again though, thanks for sharing. Bringing your own spin and ideas into the world can be anxiety-inducing and I'm pleased you went about this in a helpful and measured way!
Would you mind listing more common mistakes made by CLI developers?
Julia Evans had an interesting thread recently on “social rules” of the terminal: https://social.jvns.ca/@b0rk/113540676612640547
This is a good, open-source resource for guidelines on creating CLIs, which goes over some common mistakes: https://clig.dev/
These days: not building the output so that it can easily be spit out as JSON and/or XML markup.
Not obeying the --help flag.
Not behaving the same as robust CLI tools: -h and --help, -v and --verbose, and --version.
If this is causing you any "consternation" at all, it means you expect too much from unpaid free software developers. The repository doesn't even have a sponsor link.
The software is provided as is, in the hope it will be useful, but without any warranty whatsoever.
All free and open source software licenses contain some version of the above statement.
All of this is implicitly for personal use, in the sense that it's not a product, just something people made because they needed a problem solved.
Honestly, what's the point of comments like this? No shit, it's done for a personal hobby, you're not breaking new ground with that idea.
However, this is a website of opinions, and gp's opinion is valid, because this forum is where opinions go. It's not as though gp said to stop doing this project.
This pedantic finger-wagging is just so rote.
For what it's worth, I think that both the parent and the grandparent are valid opinions.
One says that open source projects should clearly state when they are not meant as a serious replacement for standard tools. The other says that they disagree and that open source projects don't have to give any warning.
I guess I am a little in-between: if you open source your code, I don't think you have to do anything more (it's already nice to put an open source license on it). If you advertise your open source tool (e.g. on this website), then it is polite to set expectations.
The point is expressing my opinion as a fellow free software developer. Mine is just as valid as yours or theirs. And I didn't say their opinions were invalid to begin with.
These hidden "expectations" that people seem to have regarding free and open source software can be incredibly demoralizing. It's something I wish would change. That's why I commented on it.
> These hidden "expectations" that people seem to have regarding free and open source software
Taking this to a logical conclusion: if a plumber/lawyer/<professional> offers services for free, and those services end up killing or massively damaging someone, can they just say the same thing to absolve themselves of all liability?
I also wish to change things, in the opposite direction. FOSS devs should explicitly mark things as not-for-prod; rather than pushing things as prod-ready when they aren't. I think some kind of change will come upon FOSS in the future so people can rely on it, and sadly I think that change will be adoption by corporates (w/ legal budgets) rather than the FOSS devs/ORGs themselves becoming more mature.
> If a plumber/lawyer/<professional> offers services for free, and those services end up killing, or massively damaging someone, can they just say the same thing to absolve themselves of all liability?
There is a world of difference between what professionals such as doctors do and what free and open source developers do. It's not even remotely the same. I know because I happen to be both.
And even if they were in any way comparable, professionals get paid handsomely precisely because of the liability and responsibility. If people want this out of free software developers, they should start paying them some serious money.
> I also wish to change things, in the opposite direction.
If you want this, hire a professional to do it for you instead of pushing unwanted responsibility and liability onto the rest of us. I've got more than enough of that at my actual job and I absolutely do not want it in my free software development hobby. Adding liability to free software will kill it.
> FOSS devs should explicitly mark things as not-for-prod; rather than pushing things as prod-ready when they aren't.
It already is. Everything under a free or open source software license is already marked as such. The license says so. You use it at your own risk. Up to you to determine if that's good enough for you to use in production.
> professionals such as doctors do and what free and open source developers do
You are right - there is less in the way of personal liability in the case of devs (but for the odd PII leak here and there). That is precisely why I think a disruption is coming.
> professionals get paid handsomely precisely because of the liability and responsibility
Devs are, or can also be, well-paid 'professionals'. And all are still capable of free, pro-bono work.
> instead of pushing unwanted responsibility and liability onto the rest of us
I'm not sure what you think I said...
"rather than pushing things as prod-ready when they aren't" is about the promotion of something as ready-for-production.
I'm not addressing the legal status as dictated by the (unproven) licence, which isn't relevant wrt liability anyway.
It's "shit" to you. To its creators, it's an awesome piece of software that does exactly what they wanted it to do and is as simple as they wanted it to be. They liked it enough that they thought it was worth sharing here. Maybe they don't need or care about extended attributes, and that's fine.
If you want people to work on things they don't care about, you should consider hiring them or sponsoring their work. Either way, remember to thank them for their generosity. They did give you the source code and the freedom to use and modify it, after all. Copyright law says people are entitled to neither.
If you won't pay them to get them to work on the features you want, it's entirely within your power to do it yourself. That freedom is not the default, it is a privilege that is given. People should be thankful for it.
Even if the code sucked, you have the freedom to read it and learn from its mistakes and see what it does so you can make a better version. And the world is richer and wealthier because of it.
I totally see you adopting a sick dog/cat with all those extra emotions you have.
Save a puppy! You can even brag on the social networks for some virtue signalling and karma points as a bonus.
I've saved plenty of human lives. I don't brag about it either. I rarely mention what I do for a living here.
This is not about virtue signaling. It's about business. You are not paying us. Quit demanding free work from us, and keep those expectations in check.
It's also about basic respect and manners. Someone shares non-professional work out of sheer good will, you simply don't respond by calling it shit and talking about how you expected a professional quality work instead. That's just not something that you do. It says a lot that people accept this sort of behavior here. I for one will never stop calling it out when I see it.
Did anyone here use Genera on an original Lisp machine? It had a pseudo-graphical interface, and a directory listing provided clickable results. It would be really neat if we could use escaping to convey more information to the terminal about what a particular piece of text means.
Feature-request: bring back clickable ls results!
Bonus points for defining a new term type and standard for this.
There's already `ls --hyperlink` for clickable results, but that depends on your terminal supporting the URL escape sequence.
This is nice, but a poor substitute for what Genera was doing.
You see, Genera knows the actual type of everything that is clickable. When a program needs an input, objects of the wrong type _lose their interactivity_ for the duration. So if you list the files in some directory, the names of those files are indeed links that you can click on. Clicking on one would bring up a context menu of relevant actions (view, edit, print, delete, etc). If a program asks for a filename as input then clicking on a file instead supplies the file object to the program. Clicking on objects of other types does nothing.
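For reference, the escape behind `ls --hyperlink` mentioned above is OSC 8, which any program can emit; a minimal illustration, assuming a terminal that supports it:

    # print "hosts" as a clickable link to file:///etc/hosts
    printf '\033]8;;file:///etc/hosts\033\\hosts\033]8;;\033\\\n'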
> Genera knows the actual type of everything
I have this side-project fantasy of a very simple terminal pipe-types project. The basic idea is a small set of standardized types, demarcated using escape sequences: dates, filenames, URLs, numbers, possibly one or two number units as well (time periods and file sizes only).
Tools that already produce columnar data (ls) get a flag that lets them output this format, and tools that work with piped data (cut, sort, uniq) get equivalents or modes that let them easily work with this.
Essentially, simple typed tables held in text, with enhancements for existing tooling to know how to deal with it. It would make my day-to-day on the command line much easier.
Could be fun :)
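A rough sketch of what the demarcation could look like, with completely invented escape codes (OSC 7700 here is made up for illustration, not any real standard):

    # wrap a value in a made-up "type" annotation: OSC 7700;TYPE ... OSC 7700;;
    typed() { printf '\033]7700;%s\033\\%s\033]7700;;\033\\' "$1" "$2"; }
    typed date '2024-12-01'; typed size 2147483648; typed name video.mkv; echo
    # a type-aware sort could then pick out the "size" fields numerically
    # instead of guessing from the surrounding text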
But note that on the Lisp Machine/Genera, every type has a presentation and can be “printed” to the REPL. This includes any new classes that you create as part of your own programs. It’s not just a small list of standard types, but every type.
The standard tutorial for the system is to implement Conway’s Game of Life. It has you create a class to hold the game board and then guides you through the process of defining a presentation for it so that it can be displayed easily.
I think PowerShell works this way, essentially. As I understand it, all data is structured, which makes formatting and piping to other programs much simpler.
Arcan is experimenting with something like this (among others): https://arcan-fe.com/2024/09/16/a-spreadsheet-and-a-debugger...
See also: NuShell (https://www.nushell.sh/)
nushell goes in that direction. Programs can output tables, and the shell (or other tools) knows how to work with this structured data.
I always thought to do that by having a virtual file system that tags my files, so they are available at a specific location if they fit the bill.
https://kellyjonbrazil.github.io/jc/docs/parsers/ls.html
...glom on to this: "+JSONSchema" with some sort of UNIX-ish taxonomy. Everything from `man test`, add in `man du`, `date`, `... ago` (relative time) as you'd mentioned.
`jc ls | add_schema...` => `jq ...`
...or `jc ls --with-schema | jq ...`
(it appears as though `jc` already supports schemas, so perhaps it'd be `jc ls --with-types` or something, but there's your starting point!)
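Even without a schema layer, plain `jc` piped into `jq` already gives you typed fields to filter on (field names as in jc's documented ls parser; the threshold is just an example):

    # filenames of entries over 1 MiB, comparing real integer sizes
    jc ls -l | jq -r '.[] | select(.size > 1048576) | .filename'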
That's neat and a similar idea. I think JSON probably ends up being too expressive (not just an array of identically-shaped shallow objects), too restrictive (too few useful primitives), and also too verbose a format, but a wrapping command like that is a neat starting point.
"prefer shallow arrays of 'records', possibly with a deeply nested 'uri'-style identifier"
...the clutch result is: "it can be loaded into a database and treated as a table".
The origin of this technique for me was someone saying it back in the 2000-ish timeframe (and effectively modernized here):
    sqlite-utils insert example.db ls_lart <( jc ls -lart )
    sqlite3 example.db --json \
      "SELECT COUNT(*) AS c, flags FROM ls_lart GROUP BY flags"
    [
      {
        "c": 9,
        "flags": "-rw-r--r--"
      },
      {
        "c": 2,
        "flags": "drwxr-xr-x"
      }
    ]
...this is a 'trivial' example, but it puts a really fine point on the capabilities it unlocks. You're not restricted to building a single pipeline, you can use full relational queries (eg: `... WHERE date > ...`, `... LEFT JOIN files ON git_status...`), you can refer to things by column names rather than weird regexes or `awk` scripts.
This particular example is "dumb" (but ayyyy, I didn't get a UUOC cat award!) in that you can easily muddle through it in different (existing pipeline) ways, but SQL crushes the primitive POSIX relationship tooling (so old, ugly, and unused they're tough to find!), e.g. `comm`, `paste`, `uniq`, `awk`.
Tab completion has developed some similar features. I've seen shells that will only autocomplete what seem to be appropriate choices.
I typically turn this off. Many times it's too slow, and many times it hides local filenames, and I do want local filenames.
That's one aspect I prefer in playing with TempleOS over Linux. The rest of the command line is a bit of a pain, with no history, C-as-a-shell, etc.
Maybe some aspects of the Plan9 UI? (rio/9term, plumber; acme as well).
You should be able to get this to work on Unix with plan9port.
> Feature-request: bring back clickable ls results!
Doesn't your desktop (or distro) have a graphical file manager? On KDE it's Dolphin, which ex-Windows users absolutely love. I don't know what it would be on Gnome or other desktops.
I'm not going to speak for Linux, but on Mac the Finder is annoying enough that I ended up using the CLI for file manipulation (ranger).
My ssh client also supports mouse events, though.
It's not really that, but have you tried ranger?
Sounds like a fun project. However, from the readme:
> Efficient file listing: Optimized for speed, even in large directories
What exactly is it doing differently to optimize for speed? Isn't it just using the regular fs lib?
On my system it uses twice as much CPU as plain old ls in a directory with just 13k files. To recursively list a directory with 500k leaf files, lla needs > 10x as much CPU. Apparently it is both slower and more complex.
On the latest release it can list a tree 100 levels deep with over 100k files in less than 100 ms, and about 40 ms when cached.
Will definitely prioritize optimization in the next releases. Planning to benchmark against ls on various systems and file counts to get this properly sorted.
Not trying to “gotcha” you, but I would imagine that 10x the CPU of ls is still very little, or am I wrong?
In the case of the 500k tree, `lla` needs 2.5 seconds, so it's pretty substantial.
Is listing a lot of files really CPU-limited? Isn't the problem IO speed?
What exactly makes ls faster?
But it's written in Rust so it's super fast. Did you take that into account when running your benchmarks? /s
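For anyone who wants to reproduce the comparison, `hyperfine` makes it easy; a sketch (bare invocations on an arbitrary big directory; each tool's recursive flag would need adding, and they differ):

    hyperfine --warmup 3 'ls /usr/lib' 'lla /usr/lib'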
One slept-on filesystem CLI tool on Linux is `gio`. It comes with glib2, and today glib2 is a dependency of vte, polkit, pipewire, ffmpeg, the entire GTK ecosystem... you get the point. So you can basically depend on it being there on most Linux installs, especially desktop ones.
Check out the man page: https://www.mankier.com/1/gio
Highlights:
- showing progress in the `cp` equivalent
- an easy CLI interface to the freedesktop trash (!)
- a tree command
- a filesystem changes monitor (inotify wrapper)
All of what is in the gio command used to be the gvfs-* command set.
I had no idea gio could do all those things. I've been using it to mount my smartphone from the CLI.
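A few of those highlights as one-liners (subcommands per the man page linked above; the paths are just examples):

    gio copy --progress big.iso /run/media/usb/   # cp with a progress indicator
    gio trash unwanted.txt                        # move to the freedesktop trash
    gio trash --empty                             # empty the trash
    gio tree ~/projects                           # tree-style listing
    gio monitor ~/Downloads                       # watch for filesystem changes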
I clicked on this (without noting "github") expecting an essay on the joys of building an alternative to ls.
This is basically a Show HN without a summary, I think.
fwiw: https://news.ycombinator.com/showhn.html
Does the UNIX philosophy hold anymore? Most of the modern CLI tools I've seen here try to be everything at once: file manager, git client, grep.
I wonder if it was always like this or if we're getting further and further from the idea of keeping programs simple and open.
I would say it does; those tools rarely reimplement the functions you mention, but are abstractions on top of existing CLIs or libraries that do follow the UNIX philosophy.
This project in particular is not being sold as a drop-in replacement for ls.
Other than colorization, what are people getting out of ls replacements like this? I've recently started using ranger, which might replace my ls usage for the most part, since it not only shows everything in the directory but has vim-like shortcuts for filtering, sorting, and searching the directory, as well as previewing files and entering other directories.
Hi, author of `pls`[1] here. `pls` goes above and beyond what is typically possible with `ls` without going so far as to become an entire TUI file explorer like Broot[2].
Among the few things it does that `ls` (and other alternatives like `eza`) don't do are:
- icons (SVG icons in terminals that support it, Nerd Fonts otherwise)
- advanced filtering using regex
- advanced sorting across multiple sort bases
- styles and colors using customisable rules
For someone wanting to make the output of `ls` prettier (with a few extra bells and whistles) without having to relearn a new workflow, something like an `ls` replacement makes more sense.
[1]: https://pls.cli.rs
[2]: https://dystroy.org/broot/
pls looks useful and I will retain it, but eza gives me more icons for more things via (this is my alias for `l`, basically):
`eza --long --hyperlink --header --all --icons --git --sort name`
The hyperlink thing is also useful.
ls does colored output. I'm surprised it's not the default for you.
If you run `dircolors --print-database|less` you will see that GNU ls only highlights/colors the path/filenames according to a simplistic scheme where a file can only resolve to one type even though on many terminals today "foreground overlays background overlays bold/italic/etc". (https://github.com/c-blake/lc#vector-typemulti-dimensionalit... has a more advanced idea.)
This tool by triyanox -- just from the screenshot, if you click through -- will also colorize permission masks, sizes, dates, user & group.
I managed to scroll past the screenshot twice (now and earlier) before it had loaded.
Two settings for ls make some of the colouring less useful to me.
BLOCK_SIZE="'1" formats sizes in bytes with comma separators (the leading apostrophe turns on digit grouping). TIME_STYLE=long-iso formats the dates sensibly.
This means entries line up in neater columns.
You could probably embed raw ANSI SGR color escape sequences { maybe from $(tput) if your terminal might be weird } inside a TIME_STYLE=+FORMAT to colorize the times.
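An untested sketch of that trick: GNU ls passes the bytes of a `+FORMAT` time style through verbatim, so wrapping the strftime format in tput output should colorize the whole time column:

    ls -l --time-style=+"$(tput setaf 6)%Y-%m-%d %H:%M$(tput sgr0)"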
In `lc`, mentioned a bit in this thread, you can actually color the age like a "heat map" if you want. I.e. more recent times are more toward the red side of the rainbow and older ages toward the other, "cooler" side ("cold storage"). Or whatever color scheme you like. So, if you know you're looking for something recent, the color pops out at you. If you like that kind of thing.
This github page doesn't say anything about why it turned out to be amazing; it seems like a fun side project.
Yeah, talk about hiding the headline...
I see a screenshot that looks like the output of ls; OK, it has colors, and some filenames have "!!" behind them. Great success?
Haha! Aren't all Rust rewrites about colors? Take `bat`, for example! Btw, the "!!" marks are from the git plugin, a quick way to see my workspace's git status.
Yeah, why use this instead of ls? What makes it worthwhile as a daily driver?
While you've specifically labeled this as "personal use", it is a commendable project that introduces some interesting new ideas. I might steal some ideas from it for my own `ls` alternative, `pls`[1].
[1]: https://pls.cli.rs
"pls" as in "please give me a list of files"? Does `sudo pls` negate the "please"? :p
Excellent new idea re plugins; a lot of these tools are too inflexible!
`lc`, mentioned elsethread [1], was always extensible with plugins for formatting and file-typing (but also always supported libmagic-based file-typology). There are other fairly distinctive ideas in `lc`, actually... the README has a list.
While I like it and it's a good idea, I think the reality is that developers capable enough to write shared library/DLL plugins are more likely to just submit PRs and make such stuff built-in but maybe optional.
[1] https://news.ycombinator.com/item?id=42229841
"Always" is just 4 years? Lc is also one of these new tools.
> more likely to just submit PRs and make such stuff built-in but maybe optional.
Which are more likely to just be rejected by the more conservative maintainers of the tool. That's the empowering beauty of plugins - no such barriers.
Your tone is rather disputatious/critical, but we have literally no dispute here.
I use the git command line interface. Not because it is good (it isn't) or because I enjoy suffering (I think I don't), but because it is a standard on all the machines that have, you know, git.
What good is an ls alternative if I need to install it everywhere I need ls? I'd prefer using the standard ls even if it is not ideal. But maybe that's just me.
This is also one of the reasons I write C++ with vim without any auto-completion or fancy plugins (I do use syntax highlighting, though I think that comes by default with vim nowadays), as well as using GNU screen -- not every machine installs tmux by default, surprisingly. In case I need to log in to some random Linux box, I'm sure I'll be almost as productive as I am on my own machine.
You mean, you're almost as unproductive on your development machine as on a random remote system that has no tools. And you somehow regard this as some sort of playing field leveling that generates an advantage.
Imagine a car mechanic that won't use a big hydraulic lift that hoists a car in seconds and lets him walk under it, claiming that by using a manually cranked portable jack, he can be almost as productive when fixing something by the roadside with emergency equipment as he is in his garage.
If you ever meet such a mechanic you can be sure that he programs computers as a hobby.
I assume this is tongue-in-cheek, but I don't think the comparison works at all.
I spend maybe 1% of my working hours (being generous) using `ls` and something like 50% (likely more) using my editor.
If there is some alternative to `ls` that makes my `ls` workflows 2x faster, my productivity increases by 0.5%. If I use a sub-optimal editor that makes my workflow 2x slower, I lose 25% of my productivity.
When I need to log in to a remote box, I am also very likely to need `ls`, since I am less familiar with it than with my own machine, whereas I am unlikely to do any sort of heavy development work there (typically I just need to edit a couple of configuration files or do some git operations).
I did the same thing back in the day.
I developed on SCO (and, later, Unixware) on a PC; all of the clients were running the gamut of Unix OSes: HPUX, DGUX, AIX, SunOS, you name it.
Most of the time was spent on our box in the office, but I was constantly bouncing back and forth to client systems, either on site or over the modem, having to juggle termcaps and the whole thing. It was a polyglot machine/OS world back then.
I just had to learn to get the best out of a baseline set of Unix tools: vi instead of emacs, awk instead of perl. Master those and you're never left wanting in a new environment, so I could hit the ground running. No need to "bootstrap" (if the client would even let you, which wasn't always). We couldn't even rely on a C compiler.
I’ve been on machines in the last few years that didn’t have screen either. Maybe it was a minimal install or something, but I specifically remember having to install it to get some long running stuff going.
(Thinking it was Ubuntu server, but guessing someone will correct me)
Tmux vs screen is an odd one; it kinda feels like screen was included in the era when people were actually trying to make the default install on servers kind of nice to use with a functional set of assumed programs. And now, it is fairly widespread just due to legacy.
Nowadays, and possibly for the better (every line of code is a potential bug and every bug is a potential vulnerability) it seems like systems don’t want to include this sort of stuff. So, I’m sure if the decision were made today, tmux or screen, tmux would win. Unfortunately, “none” seems like the real future option…
Ubiquitousness is certainly a major selling point. The GNU coreutils are everywhere. I've made my peace with bash and make because I know they're always gonna be there.
This doesn't mean there's no value in developing one's own tools. Contributing to other projects can be quite difficult and time consuming. GNU projects are even more so.
We shouldn't limit ourselves to POSIX stuff either. Better software and tools can and should be built. Every attempt is valuable. And who knows? It might just turn into a staple of Linux distributions some day.
What's the point of suffering everywhere if you don't enjoy it? It's not like using a better alternative prevents you from knowing how to use ls; you only need ls in those cases where there is no better alternative.
Categorization and hashes seem to be good ideas, yet you could do all of these with other tools already.
You might already know the tool 'exa', a similar ls alternative. Just wanted to mention it.
Coloring files of the same file type is my favorite feature. Is the extension used to group them, or a MIME-type parser? I guess the extension, since it is faster.
You can guess it is written in Rust before even checking the repo whenever you see that somebody made a clone of some popular systems tool like top, ls, cd, etc.
I’ve tried a few of these, but most of them seem to be following the trend of folding other shell functionality into one tool. Searching for contents (find + grep -H, or ripgrep), filtering (grep), sorting (ls does it natively, or you can use sort, sort -h for sorting human readable sizes), the list goes on and on.
I guess this is a mini lament that many of these tools are moving away from the Unix philosophy of do one thing well, and make it easy to chain.
And a last very small lament that BeOS didn’t succeed, and their filesystem-as-a-database approach didn’t become more standard.
You can still chain ripgrep. I specifically designed it so that you can chain it just like you would a normal grep.
It does indeed also include other functionality that might traditionally be left to other tools (like filtering files). But this is nothing that GNU grep wasn't already doing itself anyway.
IMO, it's better to view the Unix philosophy as a means to an end and not an end to itself. And IMO, it's important to weigh the benefits of coupling to the user experience.
>view the Unix philosophy as a means to an end and not an end to itself
It won't be a means to an end any more if you don't preserve it, so not breaking that aspect of it has to be one of your ends. If you use it to take ls to a new place, but that place is not within the ecosystem, it will be an evolutionary dead end, or worse, the first meteor in the meteor storm that ends all life.
Current/traditional Unix may not be the be-all/end-all, but replacing or changing it requires viewing it comprehensively and changing all the tools at once, or having a plan to. A good example of this is Plan9.
It is an end in itself. The reason it's a means to an end is because that was its end goal. In being a means to an end, it is an end (its end) unto itself, opposite to what you said, imho.
I still can't parse what you're saying. The Unix philosophy is a means to an end, where the ultimate end is improved user experience. The means is de-coupling and composition. But there are other means to improving the user experience.
> in being a means to an end, it is an end (its end) unto itself
This either makes zero sense or is vacuously true and clearly not in conflict with what I'm saying.
I think ripgrep specifically is counted in the comment you reply to as a tool that _does_ do one thing well, and that one should use it (or grep) in combination with an ls, instead of giving ls filtering abilities.
I suppose. But I wanted to point out that ripgrep couples functionality, specifically in contradiction to the Unix philosophy. And actually, many commands, including "traditional" tooling, do so as well.
The point is that many pay lip service to the Unix philosophy as if it were an end. But it isn't.
> You can still chain ripgrep. I specifically designed it so that you can chain it just like you would a normal grep.
Headings turned on when stdout is a tty and off when piping the output put me off when I first tried ripgrep. I don't expect tools to change their output format on me.
Luckily, you made this behavior configurable, so I'm a happy convert now.
Yes. The columns. The point is that commands have been changing their output format, not just their colors, based on tty for ages. So the criticism you lodge against ripgrep also applies to some of the most core commands you probably use daily.
I would be quite surprised if you didn't rely on this without even knowing it. Even a simple `ls | wc -l` relies on it.
I say this because it's tiring to see folks lament about this feature in ripgrep as if it's something new that ripgrep does. It's not. It's a well established idiom among Unix command line tools.
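The idiom in action, for anyone who hasn't consciously noticed it (this is stock ls, no replacement needed):

    ls          # on a terminal: multi-column, human-oriented
    ls | cat    # through a pipe: one plain filename per line
    ls | wc -l  # ...which is the only reason this count works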
They don't do one thing well, since it's all text, not structured data, which makes chained analysis a challenge, which in turn leads to the desire for integration.
ls is tabular data, and you can format it (ls -1, ls -l, ls -w, plus sorting, field formatting, and more), and you can cut/parse/format in a standard way. Every field sans the filename is fixed length, can be handled with awk/cut/sed according your daily mood and requirements, etc. etc.
So, ls can be chained very nicely, which I do every day, even without thinking.
You don't need to have a "structured data with fields" to parse it. You just need to think it like a tabular data with line/column numbers (ls -l, etc.) or just line numbers (ls -1).
So, as long as ls does one thing well, it's alright.
Ah, and some of the "enhanced" ls tools can't distinguish between a pipe and a terminal, and always print color/format escape codes to the pipe too, doubling the fun of using them. So, thanks, I'll stick with my standard ls. That one works.
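The fixed-columns point in practice: one way it might look, sketched with awk (column 5 is the size field in GNU `ls -l` output):

    # total bytes held in regular files, straight off the ls -l columns
    ls -l | awk '$1 ~ /^-/ { sum += $5 } END { print sum + 0 }'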
> You don't need to have a "structured data with fields" to parse it.
You do if you want to have nice things like being able to format your output without having to worry about breaking the dumb tools down the pipe, which can't sort the numbers they don't see:
- 2.1K (this isn't the same size as the second one)
- 2.1K
- 2.1M
Also, why do I need to count columns like a cave man in 'sort -k 5' instead of doing the obvious "sort by size"?
> print color/format escape codes to pipe too
A problem that would disappear with... structured data!
Then you sort at the point you can see the numbers and discard them later.
> Also, why do I need to count columns like a cave man in 'sort -k 5' instead of doing the obvious "sort by size"
awk can sort the columns for you. Plus, ls can already sort by size: try "ls -lS" for biggest file first, or "ls -lSr" for smallest first. Add "-h" to make the sizes human readable.
> A problem that would disappear with... structured data!
No. A problem that would disappear with "a small if block which asks which environment I'm in". If you're in a shell, the "-t" test in sh/bash will tell you that. If you're coding a tool, there are standard ways to do that (isatty(3)). Standard UNIX tools have been doing this for decades now.
IOW, structured data is not a cure for laziness...
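The "-t" test in question, as a minimal sketch of the decades-old idiom:

    # emit color only when stdout is actually a terminal
    if [ -t 1 ]; then
        color=$(tput setaf 1); reset=$(tput sgr0)
    else
        color=; reset=
    fi
    printf '%simportant%s\n' "$color" "$reset"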
> Then you sort at the point you can see the numbers and discard them later
This sort of human overhead is only needed to compensate for the deficiencies of the data structures.
> ls can already sort by size
That's the benefit of integration you're arguing against with your deficient piping suggestions.
> IOW, structured data is not a cure for laziness...
It is precisely what good design is for - it reduces the need for various dumb workarounds that bad design requires, which means you can be more lazy and avoid said workarounds.
> Yes, because their authors are not that lazy.
This just ignores the argument: "some better new tools don't do that" isn't relevant when some better new tools also do that.
A lot of this post hinges on the fact that newlines in filenames were legal, and that people wrote shell without handling quoting correctly. While quoting (as well as ls altering filenames) is still an issue, find -print0, read -d '', and similar are no longer necessary. Newlines are now forbidden in filenames: https://blog.toast.cafe/posix2024-xcu
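For reference, the pattern being retired: the NUL-delimited loop that handles newlines (and anything else) in names, in bash:

    # -print0 / read -d '' use NUL separators, which cannot appear in filenames
    find . -type f -print0 |
    while IFS= read -r -d '' f; do
        printf '%s\n' "$f"
    done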
> A bunch of C functions are now encouraged to report EILSEQ if the last component of a pathname to a file they are to create contains a newline
This, yes, makes newlines in filenames effectively illegal on operating systems strictly conforming to the new POSIX standard. However, older systems will not be enforcing this, and any operating system which exposes a syscall interface that does not require libc (such as Linux) is also not required to emit any errors. The only time, even in the future, that you should NOT worry about handling the newline case is on filesystems where it is expressly forbidden, such as NTFS.
Most utilities that create files are encouraged to error on newline filenames, which makes this effective illegality stronger. The post also discusses the future of this encouragement, which is turning it into a requirement.
> However, older systems will not be enforcing this
Eventually, newlines in filenames will go the way of /usr/xpg4/bin/sh.
I'd like to note that up until this point, there hasn't been (and still isn't) a fully POSIX-compliant way to do many shell operations on newline-containing filenames. They are already effectively unsupported, and the standard that adds support also discourages them from being created and used. The best way to handle them up until this point has been to not use sh(1).
In the past, there have been Linux-based operating systems certified as Single Unix Specification compliant, and part of said specification is POSIX. I would imagine GNU and Busybox and Musl will be willing to implement the changes proposed by POSIX 2024, which inevitably leads down the road of newlines being banned.
Tbh, I don't understand why people want to rewrite ls of all things.
Like don't get me wrong, if they had fun, that's great.
But all I use ls for is getting a list of files. I barely ever even use the -la options. There just doesn't seem to be a lot of room for improvement in something so simple.
Hi, author of `pls`[1] here. I started `pls` as a hobby project to scratch a personal itch: a "prettier" alternative to `ls`, with more colors and customisable icons. I also wanted to learn Rust as a secondary motivation.
But as I added more and more features to it, it has become a good tool that does a number of things that `ls` doesn't do (unless you chain it with other tools like `sort` or `grep`) and even other `ls` replacements don't do.
So even though `ls` is fantastic as-is, it's always fun to build something of your own, add a little more polish in areas that matter to you, and put it out there to see if it resonates with more people.
[1]: https://pls.cli.rs
I think the standard ls doesn't have much in terms of color/icons, so its simplicity probably makes it a great side project for improving on.
Not a big surface area, some easy improvements. A whole lot less stressful than rewriting grep (although I'm massively grateful Burnt Sushi did such a crazy thing)
Thanks @benrutter! You nailed it - ls is like the "Hello World" of system tools. Simple enough that you won't tear your hair out, but meaty enough to learn a ton.
Started with "ooh, pretty colors!" and before I knew it I was deep in filesystem APIs and terminal wizardry. Way less scary than tackling grep.
Sometimes the best projects are the ones where you can't mess up too badly... well, unless you accidentally delete everything while testing
That's the first thing I noticed in the options: it has modified date but not creation or access date (for listing or sorting), as far as I could tell. Of course it could be added, or I could just use `ls`.
https://github.com/c-blake/lc shows all files, including hidden files (starting with dot aka dot files) by default, suppressible in output with -xdot or a shell/internal alias to the same effect.
It helps to start with a more extensible/less built-in idea of "file type". "odd permissions" are another type that might interest someone, for example, such as "setgid but not group-executable" or "writable but not readable" or etc.
Yes, I know one can also use `find` or etc. for that, but there's no crime in there being >1 way to see things and, for some people, colors can make things really stand out - as can sort order which is another more color-blind possibility in `lc` as well as the simple filter-or-not of ls -a/-A.
Thanks for the great list! Yep, eza and g are fantastic - I actually use eza daily and love how g handles git integration.
What made me excited to experiment with lla was playing with the plugin architecture. While these other tools have great built-in features, I wanted to see if I could make something where the community could easily add their own capabilities without touching the core code. Kind of like how vim and neovim handle plugins.
Got inspired by how people keep building these ls alternatives to scratch their own unique itches. Figured why not make it easier for everyone to scratch their own itch through plugins? Still very much an experiment, but it's been fun seeing what's possible!
I wanted to plug `pls`[1], a tool that I wrote and maintain. It does a few things that `eza` (another great tool nonetheless, and a massive inspiration) cannot do[2].
[1]: https://pls.cli.rs
alias ls="EZA_COLORS='da=36' eza --time-style=relative --color-scale=age"
alias lsa="ls --almost-all" # ignore . ..
alias l="ls --long --classify=always" # show file indicators
alias la="l --almost-all"
# Tree view
alias ltreea="ls --tree"
alias ltree="ltreea --level=2"
# Sort by time or size
alias lt="ls --long --sort=time"
alias lta="lt --almost-all"
# lsd is faster than eza
alias lss="lsd --long --total-size --sort=size --reverse"
alias lssa="lss --almost-all"
lla seems to go beyond what ls should do, for some reason. Why show git and code-complexity info? Just use tools dedicated to these things; otherwise, it will become an unmaintainable mess. If you can solve a problem easily with external tools, then there's no reason to add a feature for it.
That's a great list. I have a similar list, and the aliases grow out of frequently used arguments. For example, I found myself often doing an ls -Altch, and so lsth was born. I find that aliases that are born of frequently used arguments are easily remembered. Over time that one grew to include a pipe to head, because most of the time I just want to see the top 20 or so most recently modified files in the directory.
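Presumably something along these lines (my reconstruction of the described alias, not the commenter's exact version):

    alias lsth='ls -Altch | head -n 21'   # "total" line + the 20 newest entries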
That's the amazing part I'm talking about: the learning experience you get from weeks of working on something like that is better than reading countless docs.
Oh, of course the development is fun and exciting and a learning experience.
But before inviting others to use something, please think of how to make its use more clear. After all, I assume you post this so that people use it, not only admire your coding skills. There is a group of people who have learned to read and rely on man pages.
For example, the top-level README says:
> -s, --sort <CRITERIA>: Sort by "name", "size", or "date"
OK, does "date" refer to creation date, modification date, access date?
I can understand "size", but does it produce smallest-first or largest-first? It might not matter if... ah, no, there is no -r/--reverse flag.
Can I have more than one "criteria" (since the plural is used)?
Getting answers for such questions now means I have to go read the code in src/args.rs and follow to the implementation of the various functions. And in a few days, when I have the same questions again and I have forgotten the options, I will again have to dive into the code.
Please consider providing a short man page. It documents the "calling interface" to your program and makes it easier to use. I usually start writing one even before implementing the whole thing, to clearly articulate what I expect the program to do.
Fair critique about the documentation - this needs proper attention.
Writing a man page first is a solid approach - it forces clear thinking about the interface before implementation. I'll prioritize adding complete documentation for all options and the plugin system.
The code works, but without good docs it's not truly useful.
While a man page or good documentation may not be too intriguing for you, I consider it essential for other users to adopt the tool.
Maybe there are new or modern ways to create man pages that can be stimulating for your learning experience?
I notice prior HN comments of yours mention the physical design of the NeXT cube. I cannot say it will make you not hate software, but you still might appreciate that another alternative ls, https://github.com/c-blake/lc, both re-thinks/breaks more radically with ls tradition and adapts well to something very similar to a terminal variant of the https://en.wikipedia.org/wiki/Miller_columns used in the NeXT graphical file-tree browser/navigator, via simple shell process-substitution composition. E.g., a 3-level scenario on an 80-column terminal looks like:
A shell script that uses $((COLUMNS)) arithmetic to do 2 or 4 or however many panes fit the terminal width is a pretty simple exercise for the reader, and one might want to pipe the result to less.
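A rough, untested sketch of that exercise, with plain `ls` standing in for `lc` (pr -m merges the listings side by side):

    #!/bin/bash
    # three Miller-style panes: parent, current, and a chosen child dir ($1)
    pr -m -t -w "${COLUMNS:-80}" <(ls ..) <(ls .) <(ls "${1:-.}") | less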
Thanks for all the feedback! Let me clarify a few things about lla. The most amazing part of this project wasn't just building another ls alternative - it was the incredible learning journey. Building a systems tool in Rust while implementing a plugin architecture taught me more in a few weeks than months of reading could have. Yes, it does more than traditional ls, and that's intentional. The plugin system came from scratching my own itch of constantly switching between different terminal tools. Each feature added was a chance to dive deeper into systems programming and Unix internals. The performance still needs work, and the documentation could be better. But that's the beauty of open source - you ship it, learn from the feedback, and keep improving. Building in public is an incredible way to level up your skills. For anyone considering a similar project: pick a common tool you use daily and try reimagining it. You'll be surprised how much you learn along the way.
Thank you for being one of the few projects replacing a POSIX tool which properly sets the expectation that it's for personal use. It causes me no end of consternation that I see many tools introduced which provide only the barest minimum of functionality and skip over extended attributes, ACLs, and fail to keep compatibility with flags, or don't properly separate STDOUT & STDERR.
While these may be sufficient for a naive developer, this oversight then breaks many downstream tools.
Again though, thanks for sharing. Bringing your own spin and ideas into the world can be anxiety inducing and I'm pleased you went about this in a helpful and measured way!
Would you mind listing more common mistakes made by CLI developers?
Julia Evans had an interesting thread recently on “social rules” of the terminal: https://social.jvns.ca/@b0rk/113540676612640547
This is a good, open-source resource for guidelines on creating CLIs, which goes over some common mistakes.
https://clig.dev/
These days: not building this such that they can be easily spit out as json and/or xml markup.
Not obeying the --help flag.
not behaving the same as robust cli tools. -h and --help and -v and --verbose and --version
If this is causing you any "consternation" at all, it means you expect too much from unpaid free software developers. The repository doesn't even have a sponsor link.
The software is provided as is, in the hope it will be useful, but without any warranty whatsoever.
All free and open source software licenses contain some version of the above statement.
All of this is implicitly for personal use. In the sense that it's not a product, just something people made because they needed a problem solved.
Honestly, what's the point of comments like this? No shit, it's done for a personal hobby, you're not breaking new ground with that idea.
However, this is a website of opinions, and gp's opinion is valid, because this forum is where opinions go. It's not as though gp said to stop doing this project.
This pedantic finger wagging is just so rote.
For what it's worth, I think that both the parent and the grandparent are valid opinions.
One says that open source projects should clearly state when they are not meant as a serious replacement for standard tools. The other says that they disagree and that open source projects don't have to give any warning.
I guess I am a little in-between: if you open source your code, I don't think you have anything to do (it's already nice to put an open source license on it). If you advertise your open source tool (e.g. on this website), then it is polite to set expectations.
The point is expressing my opinion as a fellow free software developer. Mine is just as valid as yours or theirs. And I didn't say their opinions were invalid to begin with.
These hidden "expectations" that people seem to have regarding free and open source software can be incredibly demoralizing. It's something I wish would change. That's why I commented on it.
> These hidden "expectations" that people seem to have regarding free and open source software
Taking this to a logical conclusion; If a plumber/lawyer/<professional> offers services for free, and those services end up killing, or massively damaging someone, can they just say the same thing to absolve themselves of all liability?
I also wish to change things, in the opposite direction. FOSS devs should explicitly mark things as not-for-prod; rather than pushing things as prod-ready when they aren't. I think some kind of change will come upon FOSS in the future so people can rely on it, and sadly I think that change will be adoption by corporates (w/ legal budgets) rather than the FOSS devs/ORGs themselves becoming more mature.
> If a plumber/lawyer/<professional> offers services for free, and those services end up killing, or massively damaging someone, can they just say the same thing to absolve themselves of all liability?
There is a world of difference between what professionals such as doctors do and what free and open source developers do. It's not even remotely the same. I know because I happen to be both.
And even if they were in any way comparable, professionals get paid handsomely precisely because of the liability and responsibility. If people want this out of free software developers, they should start paying them some serious money.
> I also wish to change things, in the opposite direction.
If you want this, hire a professional to do it for you instead of pushing unwanted responsibility and liability onto the rest of us. I've got more than enough of that at my actual job and I absolutely do not want it in my free software development hobby. Adding liability to free software will kill it.
> FOSS devs should explicitly mark things as not-for-prod; rather than pushing things as prod-ready when they aren't.
It already is. Everything under a free or open source software license is already marked as such. The license says so. You use it at your own risk. Up to you to determine if that's good enough for you to use in production.
> professionals such as doctors do and what free and open source developers do
You are right - there is less in the way of personal liability in the case of devs (but for the odd PII here and there), Precisely why I think there will be a disruption coming.
> professionals get paid handsomely precisely because of the liability and responsibility
Devs are, or can also be, well-paid 'professionals'. And all are still capable of free, pro-bono work.
> instead of pushing unwanted responsibility and liability onto the rest of us
I'm not sure what you think I said..
"rather than pushing things as prod-ready when they aren't"
Is wrt the promotion of something as ready-for-production.
I'm not addressing the legal status as dictated by the (unproven) licence, which isn't relevant wrt liability anyway.
[flagged]
Indeed. Free shit is still... shit.
It's "shit" to you. To its creators, it's an awesome piece of software that does exactly what they wanted it to do and is as simple as they wanted it to be. They liked it enough that they thought it was worth sharing here. Maybe they don't need or care about extended attributes, and that's fine.
If you want people to work on things they don't care about, you should consider hiring them or sponsoring their work. Either way, remember to thank them for their generosity. They did give you the source code and the freedom to use and modify it, after all. Copyright law says people are entitled to neither.
If you won't pay them to get them to work on the features you want, it's entirely within your power to do it yourself. That freedom is not the default, it is a privilege that is given. People should be thankful for it.
Even if the code sucked, you have the freedom to read it and learn from its mistakes and see what it does so you can make a better version. And the world is richer and wealthier because of it.
I totally see you adopting a sick dog/cat with all that extra emotions you have.
Save a puppy! You can even brag on the social networks for some virtue signalling and karma points as bonus.
I've saved plenty of human lives. I don't brag about it either. I rarely mention what I do for a living here.
This is not about virtue signaling. It's about business. You are not paying us. Quit demanding free work from us, and keep those expectations in check.
It's also about basic respect and manners. Someone shares non-professional work out of sheer good will, you simply don't respond by calling it shit and talking about how you expected a professional quality work instead. That's just not something that you do. It says a lot that people accept this sort of behavior here. I for one will never stop calling it out when I see it.
Did anyone here use Genera on an original lisp machine? It had a pseudo-graphical interface and a directory listing provided clickable results. It would be really neat if we could use escaping to confer more information to the terminal about what a particular piece of text means.
Feature-request: bring back clickable ls results!
Bonus points for defining a new term type and standard for this.
There's already `ls --hyperlink` for clickable results, but that depends on your terminal supporting the URL escape sequence.
This is nice, but a poor substitute for what Genera was doing.
You see, Genera knows the actual type of everything that is clickable. When a program needs an input, objects of the wrong type _lose their interactivity_ for the duration. So if you list the files in some directory, the names of those files are indeed links that you can click on. Clicking on one would bring up a context menu of relevant actions (view, edit, print, delete, etc). If a program asks for a filename as input then clicking on a file instead supplies the file object to the program. Clicking on objects of other types does nothing.
> Genera knows the actual type of everything
I have this side-project fantasy of a very simple terminal pipe-types project. The basic idea is a set of very basic standardized types, demarcated using escape sequences. Dates, filenames, URLs, numbers, possibly one or two number units as well (time periods, file sizes only).
Tools that already produce columnar data (ls) get a flag that lets them output this format, and tools that work with piped data (cut, sort, uniq) get equivalents or modes that let them easily work with this.
Essentially, simple typed tables held in text, with enhancements for existing tooling to know how to deal with it. Would make my day-to-day on the command line much easier.
Could be fun :)
But note that on the Lisp Machine/Genera, every type has a presentation and can be “printed” to the REPL. This includes any new classes that you create as part of your own programs. It’s not just a small list of standard types, but every type.
The standard tutorial for the system is to implement Conway’s Game of Life. It has you create a class to hold the game board and then guides you through the process of defining a presentation for it so that the it can be displayed easily.
I think PowerShell works this way essentially. As I understand, all data is structured which makes formatting and piping to other programs much simpler.
Arcan is experimenting with something like this (among others): https://arcan-fe.com/2024/09/16/a-spreadsheet-and-a-debugger...
See also:
* NuShell (https://www.nushell.sh/)
nushell goes in that direction. Programs can output tables, and the shell (or other tools) know how to work with this structured data.
I always thought to do that by having a virtual file system that tags my files and so they are available at specific location if they fit the bill.
https://kellyjonbrazil.github.io/jc/docs/parsers/ls.html
...glom on to this: "+JSONSchema" with some sort of UNIX-ish taxonomy. Everything from `man test`, add in `man du`, `date`, `... ago` (relative time) as you'd mentioned.
`jc ls | add_schema...` => `jq ...`
...or `jc ls --with-schema | jq ...`
(it appears as though `jc` already supports schema's, so perhaps it'd be `jc ls --with-types` or something, but there's your starting point!)
That's neat and a similar idea. I think JSON probably ends up being too expressive (not just an array of identically-shaped shallow objects), too restrictive (too few useful primitives), and also too verbose of a format, but the idea of a wrapping command like that as a starting point is neat
I'll share this comment from 7 months ago with you:
https://news.ycombinator.com/item?id=40100069
"prefer shallow arrays of 'records', possibly with a deeply nested 'uri'-style identifier"
...the clutch result is: "it can be loaded into a database and treated as a table".
The origin of this technique for me was someone saying back in 2000'ish timeframe (and effectively modernized here):
...this is a 'trivial' example, but it puts a really fine point on the capabilities it unlocks. You're not restricted to building a single pipeline, you can use full relational queries (eg: `... WHERE date > ...`, `... LEFT JOIN files ON git_status...`), you can refer to things by column names rather than weird regexes or `awk` scripts.This particular example is "dumb" (but ayyyy, I didn't get a UUOC cat award!) in that you can easily muddle through it in different (existing pipeline) ways, but SQL crushes the primitive POSIX relationship tooling (so old, ugly, and unused they're tough to find!), eg: `comm`, `paste`, `uniq`, `awk`
Tab completion has developed some similar features. I've seen shells that will only autocomplete what seem to be appropriate choices.
I typically turn this off. Many times it's too slow, and many times it hides local filenames, and I do want local filenames.
That's one aspect I prefer in playing with TempleOS over Linux. The rest of the command line is a bit of a pain, with no history, C-as-a-shell, etc.
Maybe some aspects of the Plan9 UI? (rio/9term, plumber; acme as well).
You should be able to get this to work on Unix with plan9port.
I'm not going to speak for Linux, but on Mac the Finder is annoying enough that I ended up using CLI for file manipulation (ranger).
My ssh client also supports mouse events, though.
It's not really that, but have you tried ranger?
Sounds like a fun project. However, from the readme:
Efficient file listing: Optimized for speed, even in large directories
What exactly is it doing differently to optimize for speed? Isn't it just using the regular fs lib?
On my system it uses twice as much CPU as plain old ls in a directory with just 13k files. To recursively list a directory with 500k leaf files, lla needs > 10x as much CPU. Apparently it is both slower and with higher complexity.
On the latest release the it can list a tree of 100 in depth with over 100k files in less than 100ms and if cached 40ms
Will definitely prioritize optimization in the next releases. Planning to benchmark against ls on various systems and file counts to get this properly sorted.
Not trying to “gotcha” you, but I would imagine that 10x the CPU of ls is still very little, or am I wrong?
In the case of the 500k tree, `lla` needs 2.5 seconds, so it's pretty substantial.
Is listing a lot of files really CPU-limited? Isn’t the problem IO speed?
What exactly makes ls faster?
But it’s written in rust so it’s super fast. Did you take that into account when running your benchmarks? /s
One slept on filesystem cli tool on linux is `gio`. So it comes with glib2. But today glib2 is a dependency of vte, polkit, pipewire, ffmpeg, the entire gtk ecosystem,... you get the point. So you can basically depend on it being there on most linux installs, especially desktop.
Checkout the man page: https://www.mankier.com/1/gio
highlights:
- showing progress in `cp` equivalent
- Easy cli interface to freedesktop trash (!)
- tree command
- filesystem changes monitor (inotify wrapper)
All of what is in the gio command used to be the gvfs-* command set.
I had no idea gio could do all those things. I've been using it to mount my smartphone from the CLI.
I clicked on this (without noting "github") expecting an essay on the joys of building an alternative to ls.
This is basically a Show HN without a summary I think.
fwiw:
https://news.ycombinator.com/showhn.html
Does UNIX philosophy holds anymore? Most of the modern CLI tools I've seen here try to be all at once: file manager, git client, grep.
I wonder if it was always like this or we're getting further and further from the idea of keeping programs simple and open.
I would say it does; those tools rarely reimplement the functions you mention, but are abstractions on top of existing CLIs or libraries that do follow the UNIX philosophy.
This project in particular is not being sold as a drop-in replacement for ls.
Other than colorization, what are people getting out of ls replacements like this? I've recently started using ranger which might replace my ls usage for the most part since it not only shows everything in the directory but has vim like shortcuts for filtering, sorting, and searching the directory as well as previewing files and entering other directories
Hi, author of `pls`[1] here. `pls` goes above and beyond what is typically possible with `ls` without going so far as to become an entire TUI file explorer like Broot[2].
Among a few things it does that `ls` (and other alternatives like `eza` don't do) are: - icons (SVG icons in terminals that support it, Nerd Fonts otherwise) - advanced filtering using regex - advanced sorting across multiple sort bases - styles and colors using customisable rules
For someone wanting to make the output of `ls` prettier (with a few extra bells and whistles) without having to relearn a new workflow, something like an `ls` replacement makes more sense.
[1]: https://pls.cli.rs [2]: https://dystroy.org/broot/
pls looks useful and I will retain it but eza is giving more icons for more things via (this is my alias for `l`, basically)
`eza --long --hyperlink --header --all --icons --git --sort name`
also the hyperlink thing is useful
ls does colored output. I'm surprised it's not the default for you.
If you run `dircolors --print-database|less` you will see that GNU ls only highlights/colors the path/filenames according to a simplistic scheme where a file can only resolve to one type even though on many terminals today "foreground overlays background overlays bold/italic/etc". (https://github.com/c-blake/lc#vector-typemulti-dimensionalit... has a more advanced idea.)
This tool by triyanox -- just from the screen shot if you click through -- will also colorize permission masks and sizes, dates, user & group.
I managed to scroll past the screenshot twice (now and earlier) before it had loaded.
Two settings for ls make some of the colouring less useful to me.
BLOCK_SIZE='1 formats sizes in bytes with comma separators. TIME_STYLE=long-iso formats the dates sensibly.
This means entries line up in neater columns.
You could probably embed raw ANSI SGR color escape sequences { maybe from $(tput) if your terminal might be weird } inside a TIME_STYLE=+FORMAT to colorize the times.
In `lc`, mentioned a bit this thread, you can actually color the age like a "heat map" if you want. I.e. more recent times are more toward the red side of the rainbow and older ages toward the other "cooler" side ("cold storage"). Or whatever color scheme you like. So, if you know you're looking for something recent, the color pops out at you. If you like that kind of thing.
[dead]
This github page doesn't say anything about why it turned out to be amazing, seems like a fun side project.
Yeah, talk about hiding the headline...
I see a screenshot that looks like the output of ls, ok it has colors, and some filenames have "!!" behind it. Great success?
Haha! Aren't all rust rewrites about colors take `bat` for example! Btw "!!" are from the git plugin, a quick way to see my workspace git status
Yeah, why use this instead of ls? What makes it worthwhile as a daily driver?
While you've specifically labeled this as "personal use", it is a commendable project that introduces some interesting new ideas. I might steal some ideas from it for my own `ls` alternative, `pls`[1].
[1]: https://pls.cli.rs
"pls" as in "please give me a list of files"? Does `sudo pls` negate the "please"? :p
Excellent new idea re plugins; a lot of these tools are too inflexible!
`lc` mentioned elsethread [1] was always extensible with plugins for formatting and file-typing (but also always supported libmagic-based file-typology). There are other fairly distinctive ideas in `lc`, actually.. the README has a list.
While I like it and it's a good idea, I think the reality is that developers capable enough to write shared library/DLL plugins are more likely to just submit PRs and make such stuff built-in but maybe optional.
[1] https://news.ycombinator.com/item?id=42229841
"Always" is just 4 years? Lc is also one of these new tools
> more likely to just submit PRs and make such stuff built-in but maybe optional.
Which are more likely to just be rejected by the more conservative maintainers of the tool. That's the empowering beauty of plugins - no such barriers
Your tone is rather disputatious/critical, but we have literally no dispute here.
I use the git command line interface. Not because it is good (it isn't) or because I enjoy suffering (I think I don't), but because it is a standard on all the machines that have, you know, git.
What good is a ls alternative if I need to install it everywhere I need ls? I'd prefer using the standard ls even if it is not ideal. But maybe that's just me.
This is also one of the reasons I write C++ with vim without any auto-completion or fancy plugins (I do use syntax highlighting, though I think that comes by default with vim nowadays), as well as using GNU screen -- not every machine installs tmux by default, surprisingly. In case I need to log in to some random Linux box, I'm sure I'll be almost as productive as I am on my own machine.
You mean, you're almost as unproductive on your development machine as on a random remote system that has no tools. And you somehow regard this as some sort of playing field leveling that generates an advantage.
Imagine a car mechanic that won't use a big hydraulic lift that hoists a car in seconds and lets him walk under it, claiming that by using a manually cranked portable jack, he can be almost as productive when fixing something by the roadside with emergency equipment as he is in his garage.
If you ever meet such a mechanic you can be sure that he programs computers as a hobby.
I assume this is tongue-in-cheek, but I don't think the comparison works at all.
I spend maybe 1% of my working hours (being generous) using `ls` and something like 50% (likely more) using my editor.
If there is some alternative to `ls` that makes my `ls` workflows 2x faster, my productivity increases by 0.5%. If I use a sub-optimal editor that makes my workflow 2x slower, I lose 25% of my productivity (50% of my time at half the effectiveness means overall output drops to 75%).
When I need to login to a remote box, I am also very likely to need to use `ls` since I am less familiar than on my own machine, whereas I am unlikely to do any sort of heavy development work (typically I just need to edit a couple configuration files, or do some git operations).
I did the same thing back in the day.
I developed on SCO (and, later, Unixware) on a PC, all of the clients were running the gamut of Unix OSes: HPUX, DGUX, AIX, SunOS, you name it.
Most of the time was spent on our box in the office, but I was constantly bouncing back and forth to client systems, either on site or over the modem, having to juggle termcaps and the whole thing. It was a polyglot machine/OS world back then.
Just had to learn to get the best out of a baseline set of Unix tools: vi instead of emacs, awk instead of perl. Master those and never be left wanting in a new environment, so I could hit the ground running. No need to "bootstrap" (if the client would even let you, which wasn't always the case). Couldn't even rely on a C compiler.
I’ve been on machines in the last few years that didn’t have screen either. Maybe it was a minimal install or something, but I specifically remember having to install it to get some long running stuff going.
(Thinking it was Ubuntu server, but guessing someone will correct me)
Tmux vs screen is an odd one; it kinda feels like screen was included in the era when people were actually trying to make the default install on servers kind of nice to use with a functional set of assumed programs. And now, it is fairly widespread just due to legacy.
Nowadays, and possibly for the better (every line of code is a potential bug and every bug is a potential vulnerability) it seems like systems don’t want to include this sort of stuff. So, I’m sure if the decision were made today, tmux or screen, tmux would win. Unfortunately, “none” seems like the real future option…
Ubiquitousness is certainly a major selling point. The GNU coreutils are everywhere. I've made my peace with bash and make because I know they're always gonna be there.
This doesn't mean there's no value in developing one's own tools. Contributing to other projects can be quite difficult and time consuming. GNU projects are even more so.
We shouldn't limit ourselves to POSIX stuff either. Better software and tools can and should be built. Every attempt is valuable. And who knows? It might just turn into a staple of Linux distributions some day.
Even ls isn't standard on all machines. GNU ls is different from BSD ls.
What's the point of suffering everywhere if you don't enjoy it? It's not like using a better alternative prevents you from knowing how to use ls, but only in those cases where there is no better alternative
Categorization and hashes seem to be good ideas, yet you could already do all of these with other tools. You may know the tool 'exa', a similar ls alternative. Just wanted to mention it.
Coloring files of the same file-type is my favorite feature. Is the extension used to group them, or a MIME parser? I guess the extension, since it is faster.
This is also part of GNU ls, at least.
I think.
You are right.
I didn't know that ls was missing plugins.
You can guess it is written in Rust before even checking the repo whenever you see that somebody made a clone of some popular systems tool like top, ls, cd, etc.
I know, right?! It's a common theme.
But recently, there were two submissions here that actually turn the "rewrite in Rust" meme into something substantial.
The two factions of C++: https://news.ycombinator.com/item?id=42231489
On "Safe" C++: https://news.ycombinator.com/item?id=42186475
Be warned that the second one is a really long read!
I, for one, have been wishing for a high-performance, extensible alternative to emacs for a long time.
There seem to be a lot of projects now competing to replace ls (per people's preferences).
For reference, these are the ones I am familiar with. They are all somewhat active, in contrast to things like exa, which is not maintained anymore.
eza: (https://github.com/eza-community/eza)
lsd: (https://github.com/Peltoche/lsd)
colorls: (https://github.com/athityakumar/colorls)
g: (https://github.com/Equationzhao/g)
ls++: (https://github.com/trapd00r/LS_COLORS)
logo-ls: (https://github.com/canta2899/logo-ls) - this is a fork, because main development stopped 4 years ago.
Any more?
Personally I prefer eza, and I wrote a zsh plugin that is basically aliases matching what I have in my muscle memory.
I've tried a few of these, but most of them seem to follow the trend of folding other shell functionality into one tool: searching for contents (find + grep -H, or ripgrep), filtering (grep), sorting (ls does it natively, or you can use sort, with sort -h for human-readable sizes); the list goes on and on.
I guess this is a mini lament that many of these tools are moving away from the Unix philosophy of do one thing well, and make it easy to chain.
And a last very small lament that BeOS didn’t succeed, and their filesystem-as-a-database approach didn’t become more standard.
You can still chain ripgrep. I specifically designed it so that you can chain it just like you would a normal grep.
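For example (the path and patterns here are made up):

    # ripgrep drops into a pipeline exactly where grep would:
    # find TODO lines, then filter out the FIXME-flavored ones
    rg -n 'TODO' src/ | rg -v 'FIXME'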
It does indeed also include other functionality that might traditionally be left to other tools (like filtering files). But this is nothing that GNU grep wasn't already doing itself anyway.
IMO, it's better to view the Unix philosophy as a means to an end and not an end to itself. And IMO, it's important to weigh the benefits of coupling to the user experience.
>view the Unix philosophy as a means to an end and not an end to itself
It won't be a means to an end any more if you don't preserve it, so not breaking that aspect of it has to be one of your ends. If you use it to take ls to a new place, but that place is not within the ecosystem, it will be an evolutionary dead end, or worse, the first meteor in the meteor storm that ends all life.
Current/traditional Unix may not be the be-all/end-all, but replacing or changing it requires viewing it comprehensively and changing all the tools at once, or having a plan to. A good example of this is Plan9.
I don't know what you're trying to say and I don't see how it's in conflict with anything I've said.
>not an end to itself
It is an end to itself. The reason it's a means to an end is that this was its end goal. In being a means to an end, it is an end (its end) unto itself, opposite to what you said, imho.
I still can't parse what you're saying. The Unix philosophy is a means to an end, where the ultimate end is improved user experience. The means is de-coupling and composition. But there are other means to improving the user experience.
> in being a means to an end, it is an end (its end) unto itself
This either makes zero sense or is vacuously true and clearly not in conflict with what I'm saying.
I think ripgrep specifically is counted in the comment you reply to as a tool that _does_ do one thing well, and that one should use it (or grep) in combination with an ls, instead of giving ls filtering abilities.
I suppose. But I wanted to point out that ripgrep couples functionality, specifically in contradiction to the Unix philosophy. And actually, many commands, including "traditional" tooling, do so as well.
The point is that many pay lip service to the Unix philosophy as if it were an end. But it isn't.
> You can still chain ripgrep. I specifically designed it so that you can chain it just like you would a normal grep.
Headings on when isatty, off when piping the output: that put me off when I first tried ripgrep. I don't expect tools to change their output format on me.
Luckily, you made this behavior configurable, so I'm a happy convert now.
> I don't expect the tools to change their output format on me.
You probably do! If you've ever used `ls`, then it does exactly this.
If you mean the ANSI color stuff, yes - I do expect these to disappear :)
I meant the "shape" of the output. It just doesn't follow the principle of least surprise.
edit: you probably meant the columns. I forgot about that, I haven't parsed ls(1) output in ages ;)
Yes. The columns. The point is that commands have been changing their output format, not just their colors, based on tty for ages. So the criticism you lodge against ripgrep also applies to some of the most core commands you probably use daily.
I would be quite surprised if you didn't rely on this without even knowing it. Even a simple `ls | wc -l` relies on it.
I say this because it's tiring to see folks lament about this feature in ripgrep as if it's something new that ripgrep does. It's not. It's a well established idiom among Unix command line tools.
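A quick demonstration of the idiom:

    # ls is tty-sensitive: columns on a terminal, one name per line
    # when stdout is a pipe or file - which is why wc -l counts entries
    ls              # multi-column (interactive)
    ls | cat        # one entry per line
    ls | wc -l      # count of directory entries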
Isn’t “don’t parse ls” like the third commandment of Unix?
You've never done `ls | wc -l`?
I've always assumed that ls doesn't change its output when piped; I've always done ls -1|wc -l. I guess I can save on a few keystrokes now.
They don't do one thing well, since it's all text rather than structured data, which makes chained analysis a challenge, which in turn leads to the desire for integration.
ls is tabular data, and you can format it (ls -1, ls -l, ls -w, plus sorting, field formatting, and more), and you can cut/parse/format it in a standard way. Every field except the filename is fixed length and can be handled with awk/cut/sed according to your daily mood and requirements, etc.
So, ls can be chained very nicely, which I do every day, even without thinking.
You don't need "structured data with fields" to parse it. You just need to think of it as tabular data with line/column numbers (ls -l, etc.) or just line numbers (ls -1).
So, as long as ls does one thing well, it's alright.
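For example, a minimal sketch of that kind of chaining (assumes the size sits in field 5 of the long listing, as with GNU ls):

    # treat ls -l as tabular data: sum the bytes held in regular files
    ls -l | awk '/^-/ { sum += $5 } END { print sum, "bytes" }'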
Ah, some of the "enhanced" ls tools can't distinguish between pipe and a terminal, and always print color/format escape codes to pipe too, doubling the fun of using them. So, thanks, I'll stick with my standard ls. That one works.
> You don't need to have a "structured data with fields" to parse it.
You do if you want to have nice things like being able to format your output without having to worry about breaking the dumb tools down the pipe, which can't sort the numbers they don't see:
- 2.1K (this isn't the same as the second)
- 2.1K
- 2.1M
Also, why do I need to count columns like a cave man in 'sort -k 5' instead of doing the obvious "sort by size"?
> print color/format escape codes to pipe too
A problem that would disappear with... structured data!
> Ah, some of the "enhanced" ls tools
so use the other "some" that can?
> which can't sort the numbers they don't see
Then you sort at the point you can see the numbers and discard them later.
> Also, why do I need to count columns like a cave man in 'sort -k 5' instead of doing the obvious "sort by size"
awk can sort the columns for you. Plus, ls can already sort by size: try "ls -lS" for biggest file first, or "ls -lSr" for smallest file first. Add "-h" to make the sizes human-readable.
> A problem that would disappear with... structured data!
No. A problem that would disappear with "a small if block which asks which environment I'm in". If you're in a shell, the "-t" test in sh/bash will tell you that. If you're coding a tool, there is a standard way to do that (isatty(3)). Standard UNIX tools have been doing this for decades now.
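A minimal sketch of that shell-side test:

    # "-t 1" asks whether file descriptor 1 (stdout) is a terminal
    if [ -t 1 ]; then
        echo "stdout is a tty: colors are safe"
    else
        echo "stdout is piped or redirected: emit plain text"
    fi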
IOW, structured data is not a cure for laziness...
> so use the other "some" that can?
Yes, because their authors are not that lazy.
> Then you sort at the point you can see the numbers and discard them later
This sort of human overhead is only needed to compensate for the deficiencies of the data structures
> ls can already sort by size
That's the benefit of integration you're arguing against with your deficient piping suggestions
> IOW, structured data is not a cure for laziness...
It is precisely what good design is for - it reduces the need for various dumb workarounds that bad design requires, which means you can be more lazy and avoid said workaround
> Yes, because their authors are not that lazy.
This just ignores the argument: "some new tools get that wrong" isn't relevant when other new tools get it right.
vanilla ls has never been particularly chainable - https://mywiki.wooledge.org/ParsingLs
A lot of this post hinges on the fact that newlines in filenames were legal, and that people wrote shell without handling quoting correctly. While quoting (as well as ls altering filenames) is still an issue, find -print0, read -d '', and similar are no longer necessary. Newlines are now forbidden in filenames: https://blog.toast.cafe/posix2024-xcu
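For reference, the NUL-delimited idiom being referred to, as a minimal bash sketch:

    # newline-safe iteration over filenames via NUL delimiters
    find . -maxdepth 1 -type f -print0 |
    while IFS= read -r -d '' f; do
        printf 'processing: %s\n' "$f"
    done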
> Newlines are now forbidden in filenames
No. To quote that article
> A bunch of C functions are now encouraged to report EILSEQ if the last component of a pathname to a file they are to create contains a newline
This, yes, makes newlines in filenames effectively illegal on operating systems strictly conforming to the new POSIX standard. However, older systems will not be enforcing this, and any operating system which exposes a syscall interface that does not require libc (such as Linux) is also not required to emit any errors. The only time, even in the future, that you should NOT worry about handling the newline case is on filesystems where it is expressly forbidden, such as NTFS.
Most utilities that create files are encouraged to error on newline filenames, which makes this effective illegality stronger. The post also discusses the future of this encouragement, which is turning it into a requirement.
> However, older systems will not be enforcing this
Eventually, newlines in filenames will go the way of /usr/xpg4/bin/sh.
I'd like to note that up until this point, there hasn't been (and isn't) a fully POSIX-compliant way to do many shell operations on newline-containing filenames. They are already effectively unsupported, and the standard that adds support also discourages them from being created and used. The best way to handle them up until now has been to not use sh(1).
Linux isn't POSIX compliant, and as far as I know has no plans to ban newlines in filenames, or even add an option to disable newlines.
In the past, there have been Linux-based operating systems that were certified as Single Unix Specification compliant, and part of said specification is POSIX. I would imagine GNU, Busybox, and Musl will be willing to implement the changes proposed by POSIX 2024, which inevitably leads down the road of newlines being banned.
How would that work? Checking strings passed to open and rejecting them? Would we then have undeletable files, as we couldn't refer to their filenames?
I know Linux allows newlines in filenames, but every time I hear it I want to drink.
I agree with this.
If they want something that is easy to use in a non-scriptable way, maybe they should replicate Norton Commander instead.
Look into far2l
If you like that philosophy, check out nushell. They go pretty hardcore on that, and they can because of structured output.
Tbh, I don't understand why people want to rewrite ls of all things.
Like don't get me wrong, if they had fun, that's great.
But all I use ls for is getting a list of files. I barely ever even use the -la options. There just doesn't seem to be a lot of room for improvement in something so simple.
Hi, author of `pls`[1] here. I started `pls` as a hobby project to scratch a personal itch: a "prettier" alternative to `ls`, with more colors and customisable icons. I also wanted to learn Rust as a secondary motivation.
But as I added more and more features to it, it has become a good tool that does a number of things that `ls` doesn't do (unless you chain it with other tools like `sort` or `grep`) and even other `ls` replacements don't do.
So even though `ls` is fantastic as-is, it's always fun to build something of your own, add a little more polish in areas that matter to you and put it out there to see if it resonates with more people.
[1]: https://pls.cli.rs
I think the standard ls doesn't have much in terms of color/icons, so its simplicity probably makes improving on it a great side project.
Not a big surface area, some easy improvements. A whole lot less stressful than rewriting grep (although I'm massively grateful BurntSushi did such a crazy thing).
Thanks @benrutter! You nailed it - ls is like the "Hello World" of system tools. Simple enough that you won't tear your hair out, but meaty enough to learn a ton. Started with "ooh, pretty colors!" and before I knew it I was deep in filesystem APIs and terminal wizardry. Way less scary than tackling grep. Sometimes the best projects are the ones where you can't mess up too badly... well, unless you accidentally delete everything while testing
Well, recursive display is nice, I guess, as well as searching on partial filenames.
Has been roughly doing the job since the 70s(?).
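Presumably via the classic pipeline, a guess at what was meant ('partial' is a placeholder pattern):

    # recursive listing, filtered on a partial filename
    ls -R | grep -i partial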
> I barely ever even use the -la options.
Certainly I use these less than plain "ls," but digging through hidden files and folders and looking at timestamps is very important for me.
That's the first thing I noticed in the options: it has modified date but not creation or access date (for listing or sorting), as far as I could tell. Of course it could be added, or I could just use `ls`.
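For the `ls` route, GNU ls can already switch which timestamp it lists and sorts by; a quick sketch (birth time needs newer coreutils plus filesystem support):

    ls -l --time=atime    # list (and with -t, sort) by access time
    ls -lt --time=ctime   # sort by status-change time
    ls -l --time=birth    # creation time, where supported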
I use ls -la via the ll alias exclusively. I find it far more readable to my eyes than plain ls.
Hidden files are almost always of interest to me since my job involves configuring servers.
https://github.com/c-blake/lc shows all files, including hidden files (starting with a dot, aka dotfiles), by default, suppressible in output with -xdot or a shell/internal alias to the same effect.
It helps to start with a more extensible, less built-in idea of "file type". "Odd permissions" are another type that might interest someone, for example "setgid but not group-executable" or "writable but not readable", etc.
Yes, I know one can also use `find` etc. for that, but there's no crime in there being more than one way to see things, and for some people colors can make things really stand out - as can sort order, which is another, more color-blind-friendly possibility in `lc`, as is the simple filter-or-not of ls -a/-A.
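For comparison, the `find` equivalents of those two permission examples:

    # setgid but not group-executable
    find . -perm -2000 ! -perm -0010
    # owner-writable but not owner-readable
    find . -perm -0200 ! -perm -0400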
Take a look at lc (but not the terminal screenshots! ;)): https://github.com/c-blake/lc
lc is a highly configurable "multi-dimensional"[1] file lister written in Nim focused on flexibility and configurability.
Key features:
- Multi-level sorting by combinations of attributes like size, time, and file type, with user-defined precedence
- Configurable file kind sorting order
- Value-dependent coloring for file attributes such as timestamps, permissions, or sizes.
- Abbreviations: Automatically shorten filenames, user/group names or symlink targets.
- File type classification: Integrates libmagic for file type inspection.
- Hyperlink support
- Per-directory configs: custom behaviors for specific directories using local tweak files (.lc).
- Lightweight (~900 lines of code) with only the author's CLI library "cligen" and Nim's stdlib as dependencies.
and more.
[1]: https://github.com/c-blake/lc#vector-typemulti-dimensionalit...
It's a rite of passage. I had some colorful 'dir' alternatives on MS-DOS 5 and eventually made my own with Turbo Pascal. Easy & fun afternoon project
Thanks for the great list! Yep, eza and g are fantastic - I actually use eza daily and love how g handles git integration. What made me excited to experiment with lla was playing with the plugin architecture. While these other tools have great built-in features, I wanted to see if I could make something where the community could easily add their own capabilities without touching the core code. Kind of like how vim and neovim handle plugins. Got inspired by how people keep building these ls alternatives to scratch their own unique itches. Figured why not make it easier for everyone to scratch their own itch through plugins? Still very much an experiment, but it's been fun seeing what's possible!
Eza is great. I was pleasantly surprised at how nice the mime type icons meshed with the terminal.
I wanted to plug `pls`[1], a tool that I wrote and maintain. It does a few things that `eza` (another great tool nonetheless, and a massive inspiration) cannot do[2].
[1]: https://pls.cli.rs [2]: https://pls.cli.rs/about/comparison/
Also “walk” is great for interactive navigation.
- https://github.com/antonmedv/walk
lc: https://github.com/c-blake/lc (in Nim).
I also used eza to replace the tree command with the --tree flag.
I have these aliases for various purposes:
# Different options to search for files
# da=36 cyan timestamps
alias ls="EZA_COLORS='da=36' eza --time-style=relative --color-scale=age"
alias lsa="ls --almost-all" # ignore . ..
alias l="ls --long --classify=always" # show file indicators
alias la="l --almost-all"
# Tree view
alias ltreea="ls --tree"
alias ltree="ltreea --level=2"
# Sort by time or size
alias lt="ls --long --sort=time"
alias lta="lt --almost-all"
# lsd is faster than eza
alias lss="lsd --long --total-size --sort=size --reverse"
alias lssa="lss --almost-all"
lla seems to go beyond what ls should do, for some reason. Why show git and code-complexity info? Just use tools dedicated to those things; otherwise it will become an unmaintainable mess. If you can solve a problem easily with external tools, then there's no reason to add a feature for it.
That's a great list. I have a similar list, and the aliases grow out of frequently used arguments. For example, I found myself often doing an ls -Altch, and so lsth was born. I find that aliases born of frequently used arguments are easily remembered. Over time that one grew to include a pipe to head, because most of the time I just want to see the top 20 or so most recently modified files in the directory.
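A hedged sketch of how that alias might end up looking (the flag set and count come from the comment; the exact form is illustrative):

    # -A almost-all, -l long, -t by time, -c ctime, -h human sizes;
    # head trims to ~20 entries (the long format's "total" line uses one)
    alias lsth='ls -Altch | head -n 21'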
Creating command-line utilities is nice, but I personally lament the lack of man pages when people write something new.
That's the amazing part I'm talking about: the learning experience you get from weeks of working on something like this is better than reading countless docs.
Oh, of course the development is fun and exciting and a learning experience.
But before inviting others to use something, please think of how to make its use more clear. After all, I assume you post this so that people use it, not only admire your coding skills. There is a group of people who have learned to read and rely on man pages.
For example, the top-level README says:
> -s, --sort <CRITERIA>: Sort by "name", "size", or "date"
OK, does "date" refer to creation date, modification date, access date? I can understand "size", but does it produce smallest-first or largest-first? It might not matter if... ah, no, there is no -r/--reverse flag. Can I have more than one "criteria" (since the plural is used)?
Getting answers to such questions now means I have to go read the code in src/args.rs and follow it to the implementation of the various functions. And in a few days, when I have the same questions again and have forgotten the options, I will again have to dive into the code.
Please consider providing a short man page. It documents the "calling interface" to your program and makes it easier to use. I usually start writing one even before implementing the whole thing, to clearly articulate what I expect the program to do.
Fair critique about the documentation - this needs proper attention. Writing a man page first is a solid approach - it forces clear thinking about the interface before implementation. I'll prioritize adding complete documentation for all options and the plugin system. The code works, but without good docs it's not truly useful.
While a man page or good documentation may not be too intriguing for you, I consider it essential for adoption by other users. Maybe there are new or modern ways to create man pages that could be stimulating for your learning experience?
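One low-effort modern route, as a sketch: GNU help2man can bootstrap a page from good --help output (the binary path below assumes the usual cargo release layout):

    # generate a man page from --help/--version output, then preview it
    help2man --no-info ./target/release/lla > lla.1
    man ./lla.1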
I know it's only for personal use, but I've never had any problems with ls not being "high-performance" enough...
brew support?
Great idea! I will be working on it!
The things I take for granted. This is a breath of fresh air! Way to rethink the fundamentals!
I can't tell if you're being sarcastic or not.
For the record I was not being sarcastic but maybe I was feeling a bit too romantic or overly supportive of OP
I notice prior HN comments of yours mention the physical design of the NeXT cube. I cannot say it will make you not hate software, but you still might appreciate that another alternative ls, https://github.com/c-blake/lc, both breaks more radically with ls tradition and adapts well to something very similar to a terminal variant of the https://en.wikipedia.org/wiki/Miller_columns used in the NeXT file-tree graphical browser/navigator, via simple shell process-substitution composition, e.g. a 3-level layout on an 80-column terminal.
Some shell script that uses $((COLUMNS)) arithmetic to do 2 or 4 panes or whatever the terminal width allows is a pretty simple exercise for the reader, and one might want to pipe the result to less.
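As a very rough illustration of that composition (plain ls stands in for lc here, and 'subdir' is a placeholder; lc's real flags and output differ):

    # three panes side by side via process substitution (bash/zsh);
    # paste joins the columns with tabs, expand aligns them at fixed stops
    paste <(ls -1 ..) <(ls -1 .) <(ls -1 subdir) | expand -t 26 | less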