Honestly no excuse for this to be done wrong on a $400 device. They should replace all deployed units for free. (Because you know these cost $20 to make.)
TBF, I paid about twice that for the JLink model I have and I've gotten value for money. I'm not complaining.
Because no one really understands USB-C...
The price is not too surprising --- the margins are insane on many embedded development tools, although the stuff coming out of China is slowly trying to change that. The most expensive part of this device is the MCU which is around $15, but comparable ones from other manufacturers are <$5.
Plenty of manufacturers understand USB-C well enough to get this right.
Out of what must be dozens, I own a total of two devices (and have encountered one more) that have this kind of mistake in their design; except for the Raspberry Pi 4, I'm not surprised, given the quality and price of the devices.
I'm sorry, but the CC1/CC2 pullup issue is so widespread that any electronics engineer getting it wrong now has no business designing USB devices.
This means there’s a small microcontroller inside the cable with information on the cable’s capabilities.
More evidence that USB-C is an insanely overengineered spec. Cables should be dumb pipes, not devices with their own active circuitry. IMHO Ethernet, while not perfect, got this part right.
The only reason the signals don’t exactly match the one above is just the orientation of the cable.
This is one of the more perplexing design decisions. Was simply mirroring the contacts too simple for them? Did someone imagine a use-case where they wanted to detect the orientation of the cable? WTF.
So how’d you safely solve having cables of (widely) varying resistance?
I appreciate having the option to choose between a thin cable for charging only, and a pretty thick (and also heavy, inflexible, and relatively expensive) one for high-speed data transfer and charging.
The only alternative to that would seem to be having different ports per device, which is infeasible in many cases, and inflexible in most others.
> Was simply mirroring the contacts too simple for them?
That would require twice the number of contacts for the same number of physical wires, which would make the plugs even larger.
> So how’d you safely solve having cables of (widely) varying resistance?
Smartphones already do this. They essentially measure the resistance of the cable by measuring the voltage and correlating it with the current drawn, and regulate the current accordingly.
> That would require twice the number of contacts for the same number of physical wires, which would make the plugs even larger.
No. Look at the pinout. It's not symmetrical. They could've made it perfectly symmetrical.
>Smartphones already do this. They essentially measure the resistance of the cable by measuring the voltage and correlating it with the current drawn, and regulate the current accordingly.
No! Where did you hear this?
Smartphones nowadays usually either rely on Type-C PD or do the far less standard Samsung/Apple resistor-on-the-USB-data-lines check.
Some power banks also do a terrible thing where they increase the current draw until they see the voltage starting to collapse, but that's to check the OC limits on the load switch. That's really starting to go away, though, because Type-C just works.
It's been common for many years. Look for a document titled "Mediatek Pump Express Introduction". Another useful phrase to search for more information: "cable impedance detection". Apparently several patents on it too.
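For what it's worth, the basic trick behind those schemes is simple: assume the charger holds a fixed voltage, sample the bus voltage at two different load currents, and the slope gives you the round-trip resistance. A minimal sketch with made-up numbers (not any vendor's actual algorithm):

```python
def estimate_cable_resistance(v_bus1, i_load1, v_bus2, i_load2):
    """Two-point estimate: if the source voltage is assumed constant,
    the droop between the two operating points is pure I*R drop."""
    return (v_bus1 - v_bus2) / (i_load2 - i_load1)

# e.g. 4.95 V measured at 0.5 A, then 4.65 V at 2.0 A:
print(estimate_cable_resistance(4.95, 0.5, 4.65, 2.0))  # -> 0.2 ohm
```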
> Some power banks also do terrible thing where they increase current pull until they see voltage starting to collapse
If I’m not mistaken, that was explicitly allowed under USB Battery Charging. But chargers have to opt in to that via signaling on D+/D-.
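Roughly how that opt-in detection works, as I remember it (a heavily simplified sketch; `drive_dplus`/`read_dminus` are hypothetical hardware hooks, and the threshold is my recollection of the spec value):

```python
def charging_port_detected(drive_dplus, read_dminus):
    """BC 1.2 primary detection, heavily simplified.

    A dedicated charging port shorts D+ to D-, so if the device drives
    ~0.6 V onto D+ and that voltage shows up on D-, it's talking to a
    charger rather than a plain data port and may draw extra current.
    """
    VDAT_REF = 0.325   # detection threshold in volts (mid-spec value)
    drive_dplus(0.6)   # VDP_SRC
    return read_dminus() > VDAT_REF
```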
> Smartphones already do this. They essentially measure the resistance of the cable by measuring the voltage and correlating it with the current drawn, and regulate the current accordingly.
Not sufficient. It’s easy to have a cable where the long conductors can safely carry a lot more current than the terminations. Or one could have a 1 foot thin cable connected in series with a 10 foot thick cable.
For that matter, a short low-ampacity cable would look just like a long high-ampacity cable.
On top of that, a slightly imprecise voltage source could also easily look like a very low resistance cable.
It's differential impedance that they measure, not the absolute voltage.
What would they measure that for?
To determine how much current they can draw.
They can only measure combined voltage sag of the source and resistive losses in the cable though that way, no?
Doesn’t help with the concern of high-resistance spots somewhere in a low-current-rated cable.
But resistance in series is additive - the resistance of the combination is dominated by the resistance of the worse cable: 0Ω + 1Ω = 1Ω
Not sure what problem you're thinking of.
It's not resistance per se that you care about, it's how much the cable is heating up which is mostly dependent on resistance per meter.
Imagine 2 cables with the same resistance, one 0.5 m long and the other 2 m. At a set amount of amps, the 0.5 m cable would need to safely dissipate 4 times as much heat as the 2 m cable over the same distance. And you don't know how long the cable actually is (they make 10 cm USB cables, after all), so you can't make any decision based on resistance that doesn't risk some cables going up in flames.
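To put numbers on that (made-up values, just to show the scaling):

```python
I = 3.0        # amps
R_TOTAL = 0.1  # ohms -- identical total resistance for both cables

for length_m in (0.5, 2.0):
    p_total = I**2 * R_TOTAL      # same 0.9 W of total heat either way
    p_per_m = p_total / length_m  # ...but crammed into less cable
    print(f"{length_m} m cable: {p_per_m:.2f} W per metre")
# 0.5 m cable: 1.80 W per metre
# 2.0 m cable: 0.45 W per metre
```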
Fortunately copper and aluminium, the two most common metals for cables, have resistance that increases with temperature.
Ultimately what I'm saying is that the endpoints have, or can measure, all the information they need to adapt, and this is even more accurate than requiring a cable to self-identify, which is worthless if it's lying (and that can happen unintentionally if it's damaged or the connector is worn).
You're not thinking this through.
Yes, resistance goes up the warmer the cable gets, but you know neither the resistance of the cable at 20 °C nor the temperature of the cable. Simple example: say the user is charging their phone, you detect the cable getting plugged in, and you measure the resistance at, say, 1 Ohm to make the numbers nice. Cool, now at what resistance do you determine the cable is too hot and reduce the current? Copper's temperature coefficient is about 0.4% per °C, so the resistance should be 1.12 Ohm at a safe 30 °C increase, right?
Wrong! The cable resistance you measured could already have been at 50 °C, and you're now pushing the cable into fire-hazard territory. This isn't theoretical, either; people plug in different devices one after another all the time (which also eliminates any sort of memorizing, if you were thinking about that). So what, are you just going to wait 5 minutes at a safe charging current and see if the temperature goes down? That's not going to fly in a market where some devices charge to 50% in like 20 minutes.
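To make the ambiguity concrete (a quick sketch, numbers invented):

```python
ALPHA_CU = 0.004  # copper tempco, roughly 0.4 %/°C

# Two cables that both read exactly 1.00 Ohm at plug-in time:
r20_a = 1.00                        # really is 1.00 Ohm at 20 °C
r20_b = 1.00 / (1 + ALPHA_CU * 30)  # was already sitting at 50 °C

# Naive rule: "1.12 Ohm means the cable warmed 30 °C, back off."
for name, r20 in (("A", r20_a), ("B", r20_b)):
    t_trip = 20 + (1.12 / r20 - 1) / ALPHA_CU
    print(f"cable {name} hits 1.12 Ohm at {t_trip:.0f} °C")
# cable A hits 1.12 Ohm at 50 °C; cable B not until ~84 °C --
# same reading, same rule, very different actual temperature.
```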
And all of this ignores another important design goal of USB-C: dumb charge-only devices exist and should continue to function with a minimum of extra components. USB-C allows this with the addition of two simple resistors that set the current you want. Measuring resistance, on the other hand, either requires an accurate voltage source on the device side (expensive) or somehow connecting the power line to some other line (how that would work without additional control electronics, I have no idea).
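For the sink side, here's roughly what those two resistors buy you: a dumb device just needs the pulldowns, and anything smarter can read the offered current straight off the CC voltage. A sketch (the threshold windows are the vRd ranges as I remember them from the Type-C spec, so double-check before relying on them):

```python
def advertised_current(cc_volts):
    """What a sink can infer from the CC pin voltage.

    The source's Rp pull-up and the sink's 5.1 k Rd pull-down form a
    divider, so the voltage on CC encodes the current on offer.
    """
    if 1.31 <= cc_volts <= 2.04:
        return "3.0 A"
    if 0.70 <= cc_volts <= 1.16:
        return "1.5 A"
    if 0.25 <= cc_volts <= 0.61:
        return "default USB power"
    return "not attached / invalid"
```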
The copper and aluminum will not become resistive enough to make a difference at temperatures low enough to prevent the rest of the cable from becoming crispy.
Most of the resistance occurring in a small part of the entire cable, potentially causing a fire, seems possible, no?
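Back-of-the-envelope sketch of why that's a real worry (invented numbers):

```python
I = 3.0             # amps
r_cable = 0.10      # ohms, spread over ~2 m of copper
r_bad_joint = 0.90  # ohms, concentrated in a worn connector (~1 cm)

# From the endpoints this is indistinguishable from one 1.0 Ohm cable,
# but the heat is anything but evenly spread:
print(I**2 * r_cable)      # 0.9 W over two metres: fine
print(I**2 * r_bad_joint)  # 8.1 W in one centimetre: melty
```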
> It's not symmetrical. They could've made it perfectly symmetrical.
Without internally connecting all corresponding pins to the same wire? To what benefit?
The parts that need to be symmetric for passive legacy devices already are.
That's not true, see this random pinout diagram: https://acroname.com/sites/default/files/shared/alt_mode_-_d...
Ethernet doesn't support 240 W.
I am very happy that I can charge my phone and laptop from the same charger, and don't ever have to worry about whether the cable I'm using will be a fire hazard.
PoE has been around for longer, and while it doesn't go that high (90W --- but at much longer distances), it uses passive cables just fine.
PoE is quite an expensive thing to implement on a board. Flyback transformers are essentially required to support the standard.
At the cost of much higher complexity on each end. USB-C still supports completely passive old chargers and devices.
The cable can still be a fire hazard if it’s a cheap cable that lies about its rated wattage, no?
That goes for anything that lies about its rating, though.
Right but consumers are much more likely to buy a cheap USB charge cable that lies about its rating (via internal chip) than an Ethernet cable that lies about its rating on the package.
Do ethernet cables even specify a wattage?
> Cables should be dumb pipes, not devices with their own active circuitry.
You only need the active circuitry for high-power (240W) or high-speed (Thunderbolt) cables. Ordinary cables just need a single, cheap resistor.
C-to-C cables never have a resistor for identification; those are for devices or legacy cables/adapters to USB-A/B.
“Active” is also somewhat ambiguous: e-marked cables can be either passive or active in terms of what they do with the signal on the high-speed data lines (such as amplifying it, converting it to optical, etc.)
The reason for detecting the orientation of the connector is higher-speed communication. USB-C 20 Gbps uses both sets of pins on the connector to shotgun two USB 3.2 10 Gbps links together to get 20 Gbps. That is why the technical spec name for 20 Gbps is "USB 3.2 Gen 2x2". That is what the "x2" means.
Knowing that USB has this feature, it follows that USB-C needs to be self-orienting, in case the two ends of the cable are plugged in with different orientations.
You say Ethernet got this part right; well, it got it right by not having a reversible connector. Ethernet has 4 TX/RX pairs, and USB-C has 2 RX/TX pairs per USB 3 connection, with 4 in total for 20 Gbps. The difference is reversibility. Is it worth the tradeoff?
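The orientation detection itself is cheap, for what it's worth: it falls out of the same CC resistors already used for attach detection. A sketch from the source's side (simplified, illustrative thresholds rather than the spec's exact vRd windows):

```python
def plug_orientation(cc1_volts, cc2_volts):
    """How a source could resolve plug orientation.

    The sink's Rd pull-down shows up on exactly one CC pin; the other
    sees the cable's Ra/VCONN load or floats.  Whichever pin lands in
    the Rd window tells the mux which set of SuperSpeed pairs to route.
    """
    def in_rd_window(v):
        return 0.25 <= v <= 2.04
    cc1_rd, cc2_rd = in_rd_window(cc1_volts), in_rd_window(cc2_volts)
    if cc1_rd and not cc2_rd:
        return "unflipped: route the A-side pairs"
    if cc2_rd and not cc1_rd:
        return "flipped: mux to the B-side pairs"
    return "nothing attached (or an accessory mode)"
```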
Most if not all Ethernet NICs have auto crossover and polarity detection. Some tolerate arbitrary lane swaps too.
> More evidence that USB-C is an insanely overengineered spec. Cables should be dumb pipes, not devices with their own active circuitry.
Why, to what end? They don't add any noticeable amount of cost to the cables, and it's a whole lot better of a solution for consumers than requiring all cables to carry 5 amps and thus making them thicker.
> Did someone imagine a use-case where they wanted to detect the orientation of the cable?
They didn't imagine anything; USB 3.0 5 Gbps with only 2 differential pairs, like you'd have in a USB-C to USB-A cable, requires this. And you can't just connect both sets of pins together like you'd do for USB 2; the resulting stubs degrade the signal too much.
Same (or very similar) problem as for early Raspberry Pi, as far as I remember.
Did I get this right? Some cables have an MCU that gets power through a 1 k series resistor on CC2? That MCU also sources ground through the cable's GND?
Yes and this is a requirement for many different connection modes. https://en.wikipedia.org/wiki/USB-C#E-Mark
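And reading it happens over PD itself: the port sends a Discover Identity request addressed to SOP', which only the cable plug answers. A sketch of the structured VDM header that kicks that off (bit layout reconstructed from memory of the PD spec, so verify before use):

```python
PD_SID = 0xFF00             # standard SVID used for Discover* commands
CMD_DISCOVER_IDENTITY = 1

def svdm_header(svid=PD_SID, version=1, obj_pos=0, cmd_type=0,
                command=CMD_DISCOVER_IDENTITY):
    """Build a 32-bit Structured VDM header."""
    return ((svid & 0xFFFF) << 16    # SVID
            | 1 << 15                # VDM type = structured
            | (version & 0x3) << 13  # structured VDM version
            | (obj_pos & 0x7) << 8   # object position
            | (cmd_type & 0x3) << 6  # 0 = initiator request
            | (command & 0x1F))      # 1 = Discover Identity

# A port sends this addressed to SOP' (the near cable plug); the
# e-marker replies with its ID Header, Product, and Cable VDOs.
```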
The USB-C spec is humongous, and hard to implement.
the complexity is high, but how else can you tell a cable that supports USB4 (40 Gbps) from one that's only good for charging your phone (and everything in between)? users aren't going to be able to tell the difference (using a cable with no data lines is already a super common issue with people getting into MCUs), so the device needs to be able to tell automatically how much data and power the connected cable can carry.
this is also why usb-c extension cables (M-F) aren't spec compliant
it’s a real cool port, but the complexity demon is definitely present in the spec :)
> but how else can you tell a cable that supports USB4 (40 Gbps) from one that's only good for charging your phone (and everything in between)?
By attempting to link up at the highest supported speed and downshifting if there's no valid signal? Ethernet had this figured out decades ago.
USB-C doesn’t only carry USB data.
I’m not sure if e.g. Displayport even has the capacity for link training (and there are USB-C to Displayport cables that have to support legacy devices that know nothing about USB); HDMI (until 2.2 or so) definitely does not.
It’s ok to not agree with the USB-IF’s tradeoffs in their solutions, but denying the complexity of the problem space can be a hint that you don’t sufficiently understand it to pass that kind of judgement.
It's a protocol they designed, so they could do whatever they wanted between the initial linkup and carrying data, including link training.
> but denying the complexity of the problem space can be a hint that you don’t sufficiently understand it
...or that I understand enough of it all at once to see that it could be made much simpler.
They did in fact not design all protocols running over USB-C, as I’ve mentioned.
This allows it to work with other ports without putting a complete link speed protocol converter into the cable or adapter.
Intel has a flow for how link training is done on DisplayPort.
Probably shouldn't be surprised, but it involves communicating over the AUX channel. Is this something that a sizable % of computers can do? For some reason I thought the AUX channel was semi-free for use, that it could carry Ethernet or USB in a pretty naked form. Didn't realize that needed mode switching?
https://www.intel.com/content/www/us/en/docs/programmable/68...
Ah, so maybe DisplayPort has mandatory link training then, which would indeed allow unmarked cables.
But to GPs point, there still needs to be a way to tell the source that a given cable is a USB-C-to-DisplayPort one in the first place. So why not include the metadata on what signal grades it’s rated for in that same indicator? That’s exactly what e-markers are.
The 1 k resistor just lets the device know there's an MCU on that line. The device would then provide power on that pin before talking to it.
USB3 cables are black magic, anyway.
Contrary to many other electronics standards, the USB-C spec is just a free download away.
Is there some device I can buy that lets me plug in a USB-C cable and tells me the capabilities of the cable?
I have a giant stack of cables and it would be nice to know what they can do.
https://a.co/d/2OMfGrg
Here you go, I got this guy and it’s fantastic.
Tells me PD capabilities, voltage/current, and resistance.
Could this be done in software? I'm very uneducated on the topic but doesn't your computer need to know the capabilities of a cable when it's plugged in? (USB speed, PD support, etc.)
At least for e-marked cables, this seems possible, and once the other side is plugged in as well, it should be clear to the controller what it is and what type of cable connects the two devices (only 2.0 cables supporting at least 3A/20V are allowed to not have a marker, per the spec).
No idea why this is not a thing on most devices.
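On Linux it sort of is, just buried: the Type-C connector class exposes whatever the port controller managed to read from the cable under sysfs. A speculative sketch (paths, and whether the identity attributes are populated at all, vary a lot by machine and kernel):

```python
from pathlib import Path

# Cable identity read from the e-marker, if the driver supports it:
for cable in Path("/sys/class/typec").glob("port*-cable"):
    print(cable.name)
    for attr in sorted((cable / "identity").glob("*")):
        print(f"  {attr.name} = {attr.read_text().strip()}")
```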
https://hackaday.com/2023/08/11/usb-c-cable-tester-is-compac...
These sorts of devices can tell you how a cable is wired up, which is great for a first pass or spot checking.
https://github.com/connection-information-suite/usb-connecti...
FNIRSI FNB58 might do what you want.