It was impacted. Amazon.com was unusable for me this morning. All I could get loaded was the header section and a blank page saying "Something went wrong."
Same here, couldn't add things to cart or see the prices for a lot of things.
DynamoDB is used *everywhere* in AMZN Retail, so this is absolutely not surprising. Plus the vast majority of internal services are using EC2 in the form of Apollo/ECS. So OP probably hit some parts of the site that are hosted in us-west-2. For all I know they started routing us-east-1 traffic to other DCs, figuring latency is a fine trade-off for availability.
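Purely as an illustration of that latency-for-availability trade-off (not a claim about how Retail actually does it): a client-side sketch where a caller tries its home region first and falls back to a replica region on errors. The table/key names and region list below are made up, and it assumes the table is replicated to both regions (e.g. as a Global Table).

```python
import boto3
from botocore.config import Config
from botocore.exceptions import ClientError, EndpointConnectionError

# Hypothetical table/key names; assumes the table is replicated to both
# regions (e.g. as a DynamoDB Global Table).
TABLE = "retail-product-catalog"
REGIONS = ["us-east-1", "us-west-2"]  # preferred region first

def get_item_with_fallback(pk_value):
    """Read from the preferred region, falling back to the next region on errors.

    The fallback read pays cross-region latency, which is the availability
    trade-off described above.
    """
    last_err = None
    for region in REGIONS:
        client = boto3.client(
            "dynamodb",
            region_name=region,
            config=Config(
                connect_timeout=2,
                read_timeout=2,
                retries={"max_attempts": 2},
            ),
        )
        try:
            resp = client.get_item(TableName=TABLE, Key={"pk": {"S": pk_value}})
            return resp.get("Item")
        except (ClientError, EndpointConnectionError) as err:
            last_err = err  # region unhealthy or throttling; try the next one
    raise last_err
```

The short timeouts on the preferred region are what keep the fallback path usable when the primary is slow rather than hard-down.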
No idea if it's related, but this is the first time I've seen "We're sorry, Customer Service chat and phone lines are not currently available at the moment."
I've been getting messages all morning about delivery delays. All were supposed to be delivered today and now they're anywhere between tomorrow and Thursday.
I was having trouble with the store this morning.
Some functions were down, e.g. order history.
It really depends on which data center you're connecting to; a few regions in the US were impacted earlier this morning.
Source: Former AWS employee. For the most part, Amazon Retail doesn’t run on AWS infrastructure and doesn’t use AWS services. I’m simplifying a little bit, but Amazon (the company) runs two sets of infrastructure: “AWS” and “CDO” (or COE, I don’t remember).
It’s an old wives’ tale that AWS came out of “excess capacity” from Amazon Retail.
To clarify, most of CDO (Consumer Devices Other) does run on AWS in the sense that NAWS is the target state and MAWS is legacy, actively (if slowly) being migrated off of. CDO (including Alexa) has been using DynamoDB/Lambda/Kinesis/SQS etc. forever; it's just the compute and kind-of network layers that are still MAWS. Even then, a large part of CDO has moved from Apollo to ECS/Fargate/whatever unholy Hex or DataPath thing they're pushing these days.
Source: Ex-AMZN
I can't get any filtering functionality right now. The question isn't "why wasn't it impacted"; it's more like, why is it still degraded?
Multi-region failover?
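If that's the idea, then on the DynamoDB side it would presumably mean something like Global Tables, where the table has replicas in more than one region so either side can serve traffic. A minimal sketch of adding a replica (table name is hypothetical; the 2019.11.21 Global Tables version also requires streams enabled with new-and-old images):

```python
import boto3

# Hypothetical table name. Assumes the table already has streams enabled with
# NEW_AND_OLD_IMAGES, which Global Tables replication requires.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a us-west-2 replica so either region can serve reads/writes if the
# other is impaired (Global Tables, 2019.11.21 version).
dynamodb.update_table(
    TableName="retail-product-catalog",
    ReplicaUpdates=[
        {"Create": {"RegionName": "us-west-2"}},
    ],
)
```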