> This pattern ([0-9][0-9]?[0-9]][.])+ matches one, two or three digits followed by a . and also matches repeated patterns of this. This wold match an IP address (albeit not strictly).
that pattern (once you fixed the typo) would not match a whole ip address unless you allowed it to also swallow the character after the last octet, which wouldn't work at, say, end of line
> e.g. This pattern ([0-9][0-9]?[0-9]][.])+ matches one, two or three digits followed by a . and also matches repeated patterns of this. This wold match an IP address (albeit not strictly).
I love regular expressions but one thing I've learned over the years is the syntax is dense enough that even people who are confident enough to start writing regex tutorials often can't write a regex that matches an IP address.
It's especially ironic given that the title of the post is "Regex Isn't Hard", and then it proceeds to make several (syntactical and logical) errors in the one real-world example.
Syntax error aside (there's an extra ] floating around), it's not even close to correct -- it'll match "999.999.999.000.999." among other things, will never match just one digit (there's a missing ?), and always insists on the trailing dot.
In practice, the first unpaired ] is treated as an ordinary character (at least according to https://regex101.com/) - which does nothing to make this regex fit for its intended purpose. I'm not sure whether this is according to spec. (I think it is, though that does not really matter compared to what the implementations actually do.)
Characters which are sometimes special, depending on context, are one more thing making regexes harder than they appear at first sight.
The author's willingness to publish code without even minimal testing does not inspire confidence.
Agreed entirely, on all those points.
Calling the extra ] a syntax error was a slight exaggeration on my part, but that was clearly an unintended extra character -- there's no way the author thinks "123].45].67].89]" is a valid IP address. But yes, it does compile and is interpreted as a valid regex, albeit not a useful one in this context.
The out-of-range values are not ideal but can be fixed with post-validation in code (which is cleaner than writing unnecessarily complicated regex, anyways). The missing ? leads to a bunch of false negatives, and the trailing . causes even more problems.
Correct - it'll accept "999.999.999.000.999." but it'll reject "127.0.0.1"
Writing one correctly is a pretty complicated task if you’re trying to write a simple tutorial… off the top of my head, you’d need:
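Something along these lines, perhaps — a sketch using Python's re.VERBOSE mode (this is purely illustrative; it is not the pattern the original comment showed):

```python
import re

ipv4 = re.compile(r"""
    ^
    (?:
        (?: 25[0-5]        # 250-255
          | 2[0-4][0-9]    # 200-249
          | 1[0-9]{2}      # 100-199
          | [1-9]?[0-9]    # 0-99, no leading zeros
        )
        \.
    ){3}
    (?: 25[0-5] | 2[0-4][0-9] | 1[0-9]{2} | [1-9]?[0-9] )   # last octet, no trailing dot
    $
""", re.VERBOSE)

print(bool(ipv4.match("127.0.0.1")))             # True
print(bool(ipv4.match("999.999.999.000.999.")))  # False
```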
… but without all the nice white space and comments, unless you’re willing to discuss regex engines that let you do multi-line/commented literals like that… I think ruby does, not sure what other languages.

The problem is that “an integer from 0-255” is surprisingly complicated for regex engines to express. And that’s not even accounting for IP addresses that don’t use dots (which is legal as an argument to most software that connects to an IP address), as other commenters have pointed out.
Regex can be good but you need to be willing to bail out when it’s not appropriate.
For something like locating IP addresses in text, using a regex to identify candidates is a great idea. But as you show, you don’t want to implement the full validation in it. Use regex to find dotted digit groups, but validate the actual numeric values as a separate step afterwards.
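For concreteness, a minimal sketch of that split in Python (the loose pattern and helper name here are made up for illustration):

```python
import re

# Loose candidate pattern: any four dotted groups of 1-3 digits.
CANDIDATE = re.compile(r"\b(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})\b")

def find_ipv4(text):
    """Find dotted digit groups with a loose regex, then validate the numbers in code."""
    for m in CANDIDATE.finditer(text):
        if all(0 <= int(octet) <= 255 for octet in m.groups()):
            yield m.group(0)

print(list(find_ipv4("hosts: 10.0.0.1, 999.1.2.3, 192.168.1.254")))
# -> ['10.0.0.1', '192.168.1.254']
```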
> I think ruby does, not sure what other languages.
You're right that Ruby has it. Perl also has /x, of course (since most of Ruby regex was "inspired" directly by Perl's syntax), as well as Python (re.VERBOSE). Otherwise, yeah, it's disappointingly rare.
.NET also supports verbose regex.
Shameless plug: My Regex engine (https://pkg.go.dev/gitea.twomorecents.org/Rockingcool/kleing...) has dedicated syntax for this kind of task.
will only match full IPv4 addresses, but is a lot stricter than the one in the article. EDIT: formatting
Well, it depends on how specific you want to be. You could do `.*`, and this will match an IP address, or you can be as specific as trying to specify number ranges digit by digit, which is so complicated that it doesn't merit a "can't even".
Also, `16843009` is an IP address, try pinging it.
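For reference, that integer decodes to 1.1.1.1; Python's standard ipaddress module shows the conversion:

```python
import ipaddress

print(ipaddress.ip_address(16843009))  # 1.1.1.1
```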
When the article starts with an AI-generated image that adds nothing to the explanation, it tends to make me suspect that the article itself was written by an AI as well...
Is it because everyone tries to make it look short?
edit: asking partly, because in my current work I occasionally have to convince non-technical users to use one type of entry over another. For that reason, easy to read, simple regex wins over fancy, but convoluted regex.
> For that reason, easy to read, simple regex wins over fancy, but convoluted regex.
Sure, I'd take \d+\.\d+\.\d+\.\d+ over... "((2(5[0-5]|[0-4][0-9])|1[0-9]{2}|[1-9]?[0-9])\.){3}(2(5[0-5]|[0-4][0-9])|1[0-9]{2}|[1-9]?[0-9])", assuming that I then validate the results afterwards.
"matches an ip address" is a vague enough specification that of course people fail.
Is it what `inet_addr` accept? In that case, "1", "0x1", "00.01", "00000.01", and more are all ip addresses. `ping` accepts all of em anyway.
Is a valid ipv6 address one with the square brackets around it? Is "::1" a valid ip address? What about "fe80::1%eth2"? ping accepts both of these on my machine (though probably not on yours, since you probably don't have an eth2 interface)
Square brackets around an IP address predate IPv6; they were (are?) used to bypass DNS lookups, and some (very) old programs required IP addresses inside [...], otherwise they were assumed to be a domain name with all the rules that implied.
^^^ this ^^^ I can’t understand my own regexes after a couple weeks - much less the ones I got the AI to write for me because I’m lazy or time constrained.
I haven't verified it but quick googling for a regex to validate all legal email addresses pointed me to https://stackoverflow.com/questions/201323/how-can-i-validat..., where one commenter posits that regex to be:
(?:(?:\r\n)?[ \t])(?:(?:(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t] )+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))"(?:(?: \r\n)?[ \t]))(?:\.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:( ?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t])))@(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\0 31]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)\ ](?:(?:\r\n)?[ \t]))(?:\.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+ (?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)\](?: (?:\r\n)?[ \t])))|(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z |(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n) ?[ \t]))\<(?:(?:\r\n)?[ \t])(?:@(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\ r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t]))(?:\.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n) ?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t] )))(?:,@(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t])* )(?:\.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t] )+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t])))) :(?:(?:\r\n)?[ \t]))?(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+ |\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r \n)?[ \t]))(?:\.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?: \r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t ]))"(?:(?:\r\n)?[ \t])))@(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031 ]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)\]( ?:(?:\r\n)?[ \t]))(?:\.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+(? :(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)\](?:(? :\r\n)?[ \t])))\>(?:(?:\r\n)?[ \t]))|(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(? :(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)? [ \t]))"(?:(?:\r\n)?[ \t])):(?:(?:\r\n)?[ \t])(?:(?:(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]| \\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t]))(?:\.(?:(?:\r\n)?[ \t])(?:[^()<>
@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|" (?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t])))@(?:(?:\r\n)?[ \t] )(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\ ".\[\]]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t]))(?:\.(?:(?:\r\n)?[ \t])(? :[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[ \]]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t])))|(?:[^()<>@,;:\\".\[\] \000- \031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|( ?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t]))\<(?:(?:\r\n)?[ \t])(?:@(?:[^()<>@,; :\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([ ^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t]))(?:\.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\" .\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\ ]\r\\]|\\.)\](?:(?:\r\n)?[ \t])))(?:,@(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\ [\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\ r\\]|\\.)\](?:(?:\r\n)?[ \t]))(?:\.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\] |\\.)\](?:(?:\r\n)?[ \t])))):(?:(?:\r\n)?[ \t]))?(?:[^()<>@,;:\\".\[\] \0 00-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\ .|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t]))(?:\.(?:(?:\r\n)?[ \t])(?:[^()<>@, ;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(? :[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t])))@(?:(?:\r\n)?[ \t])* (?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\". \[\]]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t]))(?:\.(?:(?:\r\n)?[ \t])(?:[ ^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\] ]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t])))\>(?:(?:\r\n)?[ \t]))(?:,\s( ?:(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\ ".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t]))(?:\.(?:( ?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[ \["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t ])))@(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t ])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t]))(? :\.(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+| \Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t])))|(?: [^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\ ]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t]))\<(?:(?:\r\n) ?[ \t])(?:@(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\[" ()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t]))(?:\.(?:(?:\r\n) ?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>
@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t])))(?:,@(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@, ;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t]))(?:\.(?:(?:\r\n)?[ \t] )(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\ ".\[\]]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t])))):(?:(?:\r\n)?[ \t]))? (?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\". \[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t]))(?:\.(?:(?: \r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z|(?=[\[ "()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))"(?:(?:\r\n)?[ \t]) ))@(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t]) +|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t]))(?:\ .(?:(?:\r\n)?[ \t])(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z |(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)\](?:(?:\r\n)?[ \t])))\>(?:( ?:\r\n)?[ \t]))))?;\s)
Someone else then asks the absolute razor of a question: 'What value does this add over just verifying that the input is of the form {something}@{something}.{something}?'
> 'What value does this add over just verifying that the input is of the form {something}@{something}.{something}?'
Depends if {something} can contain periods for my email.
name@antispam.mydomain.com
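As a hedged sketch, the loose form check from the question above can simply allow dots inside each {something}, so an address like that still passes (the pattern here is illustrative, not from the thread):

```python
import re

# {something}@{something}.{something}, where each {something} may itself contain dots.
FORM = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

for addr in ["name@antispam.mydomain.com", "a@b.c", "no-at-sign", "a@b"]:
    print(addr, bool(FORM.match(addr)))
# name@antispam.mydomain.com True / a@b.c True / no-at-sign False / a@b False
```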
So my brother doesn't code for a living, but has done a fair amount of personal coding, and also gotten into the habit of watching live-coding sessions on YouTube. Recently he's gotten involved in my project a bit, and so we've done some pair programming sessions, in part to get him up to speed on the codebase, in part to get him up to speed on more industrial-grade coding practices and workflows.
At some point we needed to do some parsing of some strings, and I suggested a simple regex. But apparently a bunch of the streamers he's been watching basically have this attitude that regexes stink, and you should use basically anything else. So we had a conversation, and compared the clarity of coding up the relatively simple regex I'd made, with how you'd have to do it procedurally; I think the regex was a clear winner.
Obviously regexes aren't the right tool for every job, and they can certainly be done poorly; but in the right place at the right time they're the simplest, most robust, easiest to understand solution to the problem.
My problem is that regexes are write-only, unreadable once written (to me anyway). And sometimes they do more than you intended. You maybe tested on a few inputs and declared it fit for purpose, but there might be more inputs upon which it has unintended effects. I don't mind simple, straight-forward regexes. But when they become more complex, I tend to prefer to write out the procedural code, even if it is (much) longer in terms of lines. I find that generally I can read code better than regexes, and that code I write is more predictable than regexes I write.
> I tend to prefer to write out the procedural code, even if it is (much) longer in terms of lines.
This might work for you, but in general the amount of bugs is proportional to the amount of code. The regex engine is already thoroughly tested by someone else, while a custom implementation in procedural code will probably have bugs and be a lot more work to maintain if the pattern changes.
> This might work for you, but in general the amount of bugs is proportional to the amount of code.
If you wanted to look for cases which serve as an exception to this rule, code relying on regexes would be an excellent place to start.
In general, the correctness of the code is proportional to its readability.
I also prefer procedural code instead of regexes.
Surely complexity is a factor? A procedural implementation will necessarily have the same essential complexity as the regex it replaces, but then it will additionally have a bunch of incidental complexity in matching and looping and backtracking.
Regexes can certainly be hard to read - the solution is to use formatting and comments to make them easier to understand - not to drown the logic in reams of boilerplate code.
> unreadable once written (to me anyway). (…) there might be more inputs upon which it has unintended effects.
https://regex101.com can explain your regex back to you, and allows you to test it with more inputs.
Though I’m not trying to convince you to always use regular expressions, I agree with GP:
> Obviously regexes aren't the right tool for every job, and they can certainly be done poorly; but in the right place at the right time they're the simplest, most robust, easiest to understand solution to the problem.
What makes them unreadable to you? 99% of the time you can just read them character by character, with maybe some groups and back references.
I don’t think this is a particularly useful question. If they could accurately describe what exactly is confusing they wouldn’t be confused.
These are all valid criticisms of regex
but they’re not an excuse to avoid regex. Similarly git has many warts but there’s no getting around it. Same with CSS
If you want to run with the herd though you need to know these things, even enjoy them.
You can rely on tooling and training wheels like Python VERBOSE but you’re never going to get away from the fact that the “rump” of the population works with them.
Easier to bite the bullet and get practised. I’ve no doubt you have the intellect - you only need be convinced it’s a good use of your time.
Kind of fair.
I don't incorporate a lot of regular expressions into my code. But where I do like them is for search and replace. So I do treat them as mostly disposable.
You know you can write comments in your code where the regexp is, right?
Confession: Regex knowledge is one of those things I've let completely atrophy after integrating LLMs into my workflow. I guess if the day comes that AI/ML models suddenly disappear, or become completely unavailable to me, I'll have to get into the nitty gritty of Regex again...but until that time, it is a "solved problem" for my part.
It's hilarious that the most reliable way to write a complex regex is to fire up billions of dollars of state of the art ML code and ask for what you want in English.
IMO it’s a “language” you need to understand in order to use.
Just like you wouldn’t copy/paste any random snippet into your source code if you don’t understand exactly what it does.
I see a lot of broken regex at work from people who use regular expressions but don’t understand them (for various reasons).
It used to come with a “found this on stackoverflow”-excuse, but mostly now it’s “AI told me to use this” instead.
yeah, programmers famously understand all the random boilerplate incantations they copy-paste into their code to get things going.
totally definitively
I know some people consider this fine. I do not. The fact that the world is not ideal does not mean that we cannot continue to improve things.
We all have our own ideas of Utopia I guess :)
Yeah, this is my heaviest use case too. Mostly because it generally does save me a bit of time and is easily verifiable with tools like rubular and then can tweak what is needed once 90% there.
> Instead, use a range negation, like [^%] if you know the % character won’t show up. It doesn’t hurt to be a little more explicit.
This is absolutely horrible; patterns are fairly readable if they follow the syntax logic. Matching "everything but that random character that will not appear" is absurd.

Also, the idea that a . (dot) behaves arbitrarily in different languages shows a severe lack of understanding about regex syntax. Of course you can't write a proper pattern if you don't know which syntax is used. If anything, you would force-override the behavior of the . (dot) with the appropriate flag to ensure it works the same with different compatible regex engines.
Agreed, I wanted to write the whole article off after that suggestion. That is such a terrible anti pattern that would confuse everyone who looked at it, even people with decades of experience.
I’m a fan of regular expressions, though I understand why many people wince at the sight. You should avoid showing them to a non-programmer who is interested in learning to code, because they’ll immediately fear programming is intractable.
Even as much as I like regex, I wouldn’t recommend this post. One reason is the code style is too close to regular text:
> a matches a single character, always lowercase a.
That sentence uses “a” three times, twice as code and once as an indefinite article, but it’s not immediately obvious to the eye. VoiceOver completely fumbles it, especially considering the sentence immediately after.
A more important reason against recommending the article is that I find a bunch of the arguments to be unhelpful. If you’re trying to convince people to give regular expressions a chance, telling them to ignore `.` and use `[^%]` is going to bite them. That’s not super common (important when trying to learn more from other sources) and even an experienced regexer must do a double take to figure out “is there a reason this specific character must not be matched?” Furthermore, no new learner is going to remember that four character incantation, and neither are they going to understand what’s happening when their code doesn’t work because there was a `%` in their text. People need to learn about `.` (possibly the most common character in regex) if only because they also need to learn to escape it and not ignore it when there is a literal period in the text. Don’t tell people to ignore repetition ranges either, they aren’t difficult to reason about and are certainly simpler to read than the same blob of intractable text multiple times.
I've also seen people use `[\s\S]` to match all characters when they couldn't use `.`.
This is a common approach when the regex needs to match any character including newlines; `.` often doesn't.
I generally use `[^]`
Also, you can use . with the dotAll flag (/s).
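In Python, for instance, the equivalent is re.DOTALL (or an inline (?s)); a quick comparison with the [\s\S] workaround:

```python
import re

text = "<p>one\ntwo</p>"
print(re.findall(r"<p>(.*)</p>", text))             # [] — . does not match the newline
print(re.findall(r"<p>(.*)</p>", text, re.DOTALL))  # ['one\ntwo']
print(re.findall(r"<p>([\s\S]*)</p>", text))        # ['one\ntwo'] — the [\s\S] workaround
```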
Regexes are powerful, useful and needlessly hard to use.
But not because of the regex idea itself.
It is quoting.
The reason people don't properly learn how to use a regex is because they are insulated from it by whatever language they are using.
It's literally like those surgeons who do heart surgery starting at a vein in your leg.
I use regexes all the time, in emacs, python, perl, bash, sed, awk, grep and more...
and just about every time the regex syntax is mixed with single quotes, double quotes, backslashes, $variable names and more from the "enclosing language or tool".
If I have a parenthesis or $, I'm always wondering if it is part of the enclosing language, or the matching pattern, or the literal. Also, the kind of regex adds to the confusion (basic or extended regex?)
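A small, made-up illustration of how many quoting layers can sit between you and the pattern — here, matching a literal "$5" (assuming GNU grep and Python):

```python
import re

# The regex itself is just:  \$5
#   grep, single quotes:   grep '\$5' prices.txt       # the shell passes it through untouched
#   grep, double quotes:   grep "\\\$5" prices.txt     # the shell eats one escaping layer
#   Python, plain string:  re.compile("\\$5")
#   Python, raw string:    re.compile(r"\$5")          # raw strings keep one layer out of the way
print(bool(re.search(r"\$5", "coffee costs $5 today")))  # True
```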
I think it would be nice to have a syntax highlighter that would help with this, independent of language: green for a variable or other language construct, red for the regex pattern, white for a matching literal.
Wait until somebody uses string templating to insert something that ends with a backslash, changing the meaning of following characters from what the syntax highlighting thinks; a curse be upon that person.
Escaping/quoting is such a mud pile everywhere because it's in-band communication, but nobody would tolerate going all out-of-band because it's too tedious. At least newer languages are getting better with things like 'raw' strings or Rust's arbitrarily long delimiters, but I'd still like more control.
I'm surprised I never see languages adopt directed delimiters like {my string} or something, since it lets you avoid escaping in the very common case of balanced internal delimiters.
Regex is much easier if you don't do it all at once. It's perfectly acceptable to, say, trim all the leading spaces, store the result in a temp variable, trim all the trailing spaces, store the result in a temp variable, remove all the hyphens. etc. etc.
Everyone tries to create the platonic ideal regex that does everything in one line.
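A sketch of that stepwise style (the input and the particular steps are made up):

```python
import re

raw = "  555--867 5309  "
s = re.sub(r"^\s+", "", raw)   # 1. trim leading spaces
s = re.sub(r"\s+$", "", s)     # 2. trim trailing spaces
s = re.sub(r"[-\s]+", "-", s)  # 3. normalize separators to a single hyphen
print(s)                       # 555-867-5309
```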
Found an error immediately: "Any lowercase character" doesn't match all Swedish lowercase characters.
Ok. This sounds like an interesting detour. Can you elaborate on that one? I doubt I will ever use that knowledge, but it sounds like it is worth knowing anyway.
https://en.wikipedia.org/wiki/Swedish_alphabet
The author says “any lowercase character” but they mean “any character between the character ‘a’ and the character ‘z’”, which happens to correspond to the lower case letters in English but doesn’t include ü, õ, ø, etc.
lol really? Why not? Is that true for all encodings? Is it a bug or a feature? What about a simple character set like gsm-7 Swedish?
> but they mean “any character between the character ‘a’ and the character ‘z’”, which happens to correspond to the lower case letters in English
‘Only’ in the most commonly used character encodings. In EBCDIC (https://en.wikipedia.org/wiki/EBCDIC), the [a-z] range includes more than 26 characters.
That’s one of the reasons POSIX has character classes (https://en.wikipedia.org/wiki/Regular_expression#Character_c...). [[:lower:]] always gets you the lowercase characters in the encoding that the program uses.
I would expect [a-z] to mean any lowercase in any language, not lowercase but only a to z. So I’d get bitten by that one.
The letters with diacritics sort lexicographically after 'z', so it does stand to reason they wouldn't appear in that range.
The Swedish alphabet includes characters outside of the a-z range.
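A quick illustration of that gap in Python (standard re module; "påtår" is just an example all-lowercase Swedish word):

```python
import re

word = "påtår"
print(bool(re.fullmatch(r"[a-z]+", word)))     # False — å falls outside a-z
print(bool(re.fullmatch(r"[^\W\d_]+", word)))  # True  — "any Unicode letter" workaround
print(word.islower())                          # True  — or sidestep regex entirely
```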
If you take the regex subset that works uniformly across all regex engines (even for just perl-compatible engines), you would probably get nothing done. They all have some minor variations that make it impossible to write a regex for a particular engine without a reference sheet open nearby, even if you have years of experience writing them. And those 'shortcuts' like look-ahead and look-behind are often too useful to be neglected completely.
Crafting regexes is a story of its own. The other commenter has described it. Just to summarize: regexes are fine for simple patterns, but their complexity explodes as soon as you need to handle a lot of corner cases.
Here’s a regex crossword:
https://jimbly.github.io/regex-crossword/
See also: Are Regex Crosswords NP-hard?
https://cs.stackexchange.com/questions/30143/are-regex-cross...
In a previous job I've done some stupid tricks with regexes. Inside a MongoDB database I had documents with a version field in string form ("x.y.z") and I needed to exclude documents with a schema too old to process in my queries.
One can construct a regex that matches a number between x and y by enumerating all the digit patterns that fit the criteria. For example, the following pattern matches a number between 1 and 255: ^([1-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$
This can be extended to match a version less than or equal to x.y.z by enumerating all the patterns across the different fields. The following pattern matches any version less than or equal to 2.14.0: ^([0-1]\.\d+\.\d+|2\.[0-9]\.\d+|2\.1[0-3]\.\d+|2\.14\.0)$
Basically, I wrote a Java method that would generate a regex with all the patterns to match a version greater than or equal to a lower bound, which was then fed to MongoDB queries to exclude documents too old to process based on the version field. It was a stupid solution to a dumb problem, but it worked flawlessly.
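A sketch of that generator idea (the original was a Java method; this Python version and its names are made up, and it assumes version fields are plain decimal numbers without leading zeros):

```python
import re

def ge_number(n: int) -> str:
    """Regex alternatives matching a decimal integer that is >= n."""
    s, d = str(n), len(str(n))
    alts = [s]                                               # exactly n
    for i, ch in enumerate(s):                               # same length, larger digit at position i
        if ch < "9":
            alts.append(f"{s[:i]}[{int(ch) + 1}-9]\\d{{{d - i - 1}}}")
    alts.append(f"[1-9]\\d{{{d},}}")                         # any number with more digits than n
    return "(?:" + "|".join(alts) + ")"

def version_ge_regex(major: int, minor: int, patch: int) -> str:
    """Anchored pattern matching "x.y.z" with (x, y, z) >= (major, minor, patch)."""
    return ("^(?:"
            + ge_number(major + 1) + r"\.\d+\.\d+"                        # bigger major
            + "|" + str(major) + r"\." + ge_number(minor + 1) + r"\.\d+"  # same major, bigger minor
            + "|" + str(major) + r"\." + str(minor) + r"\." + ge_number(patch)
            + ")$")

pattern = version_ge_regex(2, 14, 0)
print(bool(re.match(pattern, "2.14.3")), bool(re.match(pattern, "2.13.9")))  # True False
```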
I consider myself a reasonably competent senior engineer, and yet with regex this is what I have noticed:
Every time I need to write even the simplest regex, I can't seem to get it right the first time. I always need to struggle with it for a long time. Sometimes even using online tools takes me time to get it right. This happens every.single.time.
It baffles me to no end. I'm a pretty quick learner of pretty much everything I get into. I write the most sophisticated Typescript code you can imagine; I've written a small toy language; I've written biometric authentication drivers; I've written my own functional UI lib. But, I cannot master regex.
You can give me all the arguments about what is good about regex, but in my experience (which you can't argue with), it is a VERY badly designed API, and nothing will convince me otherwise. Regex is probably the worst thing ever in programming.
One can think of regex as very compact notation for writing text operations. It helps a lot.
The popular idea of them being write-only is obviously a joke, but it has some truth to it. On the good side, small code that needs to be rewritten is often better than large code that needs to be maintained.
For me, the main problem of the Regex syntax is the escaping rules: Many characters require escaping: \ { } ( ) [ ] | * + ? ^ $ . And the rules are different inside square brackets. I think it would be better if literal text is enclosed in quotes; that way, much less escaping is needed, but it would still be concise (and sometimes, more concise). I tried to formulate a proposal here: https://github.com/thomasmueller/bau-lang/blob/main/RegexV2....
One thing I noticed with the example `['0-9a-f']`
Doesn't this go against the "literals are enclosed in quotes" idea? In this case, you have a special character (`-`) inside a quoted string. IMO this would be more consistent: `['0'-'9''a'-'f'']`, maybe even have comma separation like `['0'-'9','a'-'f'']`. This would also allow you to include the character classes like `[d,'a'-'f'']` although that might be a little confusing if you're used to normal regex.
Thanks for reading and taking the time to respond!
> Doesn't this go against the "literals are enclosed in quotes" idea?
Sure, one could argue that other changes would also be useful, but then it would be less concise. I think the main reasons why people like regex are: (a) powerful, (b) concise.
For my V2 proposal, the new rule is: "literals are enclosed in quotes", the rule isn't "_only_ literals are enclosed in quotes" :-) In this case, I think `-` can be quoted as well. I wanted to keep the v2 syntax as close as possible to the existing syntax.
I tend to use regular expressions more commonly on the command line (looking for content in files, especially log files) than I do in code. But, that being said, I do use them in both cases. They're a tool and can be used well. But, like any other programming, you need to make sure your code is readable. Which (generally) means avoiding any really complex regular expressions.
I'll jump in here just to say that non-greedy constructions are valuable, and not using them makes expressions harder to write and to understand.
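The classic illustration (a made-up snippet, using Python's re module):

```python
import re

text = "<b>one</b> and <b>two</b>"
print(re.findall(r"<b>(.*)</b>", text))   # greedy:     ['one</b> and <b>two']
print(re.findall(r"<b>(.*?)</b>", text))  # non-greedy: ['one', 'two']
```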
My issue with regexes is that the formal definition of regex I learned at university is clear and simple [0] but then using them in programming languages is always a mess
[0] https://en.wikipedia.org/wiki/Regular_expression#Formal_lang...
Nothing is hard once you've learned to do it intuitively.
The hardest part is remembering how you struggled with it when you started.
It can help to learn what the “regular” part of regular expression refers to.
> NOTE: Some languages, like Rust, have parser combinators which can be as good or better than regex in most of the ways I care about.
What Rust feature is this referring to?
I mean sure, if it was my full-time job to write regexes I’d probably get pretty good at it. But instead a really complex one comes up maybe once a year for me, so I have to go to some online regex checker and start iteratively building one up, spending hours only to find some condition where it doesn’t work, and then it's back to the checker...
So I don’t think it’s easy, but I do agree that they are very useful.
It's like a programming language inside a programming language.
I strongly agree with [^"] etc. over . and .?
Involves much less thinking!
what about "hello \"there\"" ?
Not sure what you are asking?
This is both a demo for the beauty and power of regexes, and of their dangers:
* The use of backslash separators quickly makes a mess, as they tend to need escaping wherever regexes are useful.
* The uppercase/lowercase is only right if there are no accented characters, so basically only the USA. This is bad in Western Europe in files where accents are rare: your program works for a while, then an accent sneaks in and breaks things.
* The exact meaning of all the specials like \( vs ( .
* Ranges work in most regex dialects but not everywhere.
* A simple regex for an int with a specific range is nasty. If you want a full float, good luck (see the sketch after this comment).
Regexes are great as initial filter or quick hack, but you need more in full size programs.
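On that last bullet, a hedged sketch of what a "full float" matcher can look like (this particular pattern is illustrative and deliberately excludes plain integers):

```python
import re

FLOAT = re.compile(r"[+-]?(?:\d+\.\d*|\.\d+|\d+(?=[eE]))(?:[eE][+-]?\d+)?$")

for s in ["3.14", "-0.5", "1e10", ".5", "2.", "abc", "42"]:
    print(s, bool(FLOAT.match(s)))
# 3.14 True / -0.5 True / 1e10 True / .5 True / 2. True / abc False / 42 False
```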
I'd love to see a better regex syntax, too.
The text on that AI-generated image at the top is definitely... interesting
I am slow, why do you say this?
Honestly regex syntax is a mess. For example parentheses are used both for grouping alternatives and for capturing. I think Perl 6 tried (and failed) to fix this. Larger problem is you have to memorize the meta characters since they are basically random.
Regex is still the best solution I know of for its intended domain.
I’ve started using LLMs to identify the proper regex for my use cases. I’d like to see such regex creation as an LLM benchmark.
This is truly one thing AI solved. Hard to write, easy to test. No one needs to learn this convoluted syntax in the future and we're all better for it.
Nothing that LLMs produce today is good enough to bypass a developer who can judge whether it's correct or not.
I wonder if the problems people are pointing out with the examples (lowercase not being correct under various locales, IP address regex not being conformant etc) would be absent in code furnished by LLMs.
How would you know if a regex is correct if you don't understand it?
You have test strings covering all cases and they match accordingly? The same way you'd know when writing manually.
Covering all cases? How would that be possible? Even if we only consider ASCII strings, there are 16,000 possible two-character strings, 2 million possible three-character strings and so on.
I like the sentiment but I would make some very different choices. For instance, use the . operator, because it is easier to understand than his Rube-Goldberg-logic negation groups alternative.
He’s also strangely worried about portability. If you are really concerned about portability, you are moving between languages and you probably aren’t some novice who should be frightened by complexity.
I don’t think about portability at all, ever. And I do maintain code in Perl, Python, and Javascript.
But yeah, just as in all programming languages, you can get by with knowing about a 20% subset of all it can do.