This is a smart approach to API testing - capturing real production patterns is way more valuable than synthetic tests.
One question: how do you handle sensitive data in the captured traces? We've been working on API governance at toran.sh and found that policy enforcement during trace capture can be tricky - especially ensuring PII doesn't leak into test fixtures.
Great work on the trace replay mechanism!
Thanks! Great question - we have a Transforms system that lets you define redaction rules (redact, mask, replace, or drop) using matchers with JSONPath support. Transforms are applied at capture time, so sensitive data never leaves your service boundary.
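To give a rough feel for it, here's a hypothetical sketch in Python - the rule shape and field names are invented for illustration, not our exact schema (the real configuration format is in the docs below):

```python
# Hypothetical sketch of capture-time redaction rules -- NOT the actual
# Tusk Drift Transforms schema, just the general shape of the idea.
# Requires jsonpath-ng (pip install jsonpath-ng).
from jsonpath_ng import parse

# Each rule pairs a JSONPath matcher with one of the four actions.
RULES = [
    {"match": "$.user.email", "action": "mask"},     # hide value, keep its length
    {"match": "$.user.ssn", "action": "redact"},     # swap in a fixed placeholder
    {"match": "$.card.number", "action": "replace", "value": "4242424242424242"},
    {"match": "$.session.token", "action": "drop"},  # remove the field entirely
]

def apply_transforms(trace: dict) -> dict:
    """Run every redaction rule over a captured trace before it is persisted."""
    for rule in RULES:
        expr = parse(rule["match"])
        if rule["action"] == "drop":
            expr.filter(lambda _: True, trace)   # delete all matching nodes
        elif rule["action"] == "redact":
            expr.update(trace, "[REDACTED]")
        elif rule["action"] == "replace":
            expr.update(trace, rule["value"])
        elif rule["action"] == "mask":
            for found in expr.find(trace):
                found.full_path.update(trace, "*" * len(str(found.value)))
    return trace
```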
Full docs here: https://docs.usetusk.ai/api-tests/pii-redaction/basic-concep...
Would love to hear what patterns you've found work well at Toran!
I'm sold on this. Have been looking for something similar!
Give it a spin and let us know what you think! :)
Building something adjacent in this field: https://voiden.md/.
Also, I loved the approach here!
Thank you!
Can this be used for performance testing as well or just functional testing?
Currently, Tusk Drift focuses on functional/regression testing - we mock outbound dependencies (DBs, external APIs) for determinism, so we're not measuring real-world performance characteristics today.
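If it helps, the general record/replay technique looks roughly like this - an illustrative Python sketch, not our SDK's actual API, with every name below a stand-in:

```python
# Illustrative record/replay interception for outbound calls -- the general
# technique, not the Tusk Drift SDK. All names here are hypothetical.
import hashlib
import json

RECORDINGS: dict[str, dict] = {}  # request fingerprint -> recorded response

def fingerprint(method: str, url: str, body: dict | None) -> str:
    """Deterministically key an outbound request by its content."""
    raw = json.dumps({"m": method, "u": url, "b": body}, sort_keys=True)
    return hashlib.sha256(raw.encode()).hexdigest()

def real_http_call(method: str, url: str, body: dict | None) -> dict:
    # Stand-in for the real transport (an HTTP client, a DB driver, etc.).
    return {"status": 200, "echo": body}

def outbound(method: str, url: str, body: dict | None = None, *, replay: bool) -> dict:
    key = fingerprint(method, url, body)
    if replay:
        # Replay mode: the dependency is never touched, so the test is
        # deterministic regardless of DB state or third-party uptime.
        return RECORDINGS[key]
    response = real_http_call(method, url, body)
    RECORDINGS[key] = response  # capture mode: save for later replay
    return response
```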
That said, we're also exploring extending it for capacity modeling and resource estimation, which would be a differentiated approach from traditional load testing. Synthetic benchmarks fail to capture how traffic patterns (not just volume) affect resource usage. Since we already record real production traffic, we're uniquely positioned to:
1. Replay specific time periods (e.g., last year's Black Friday sale)
2. Preserve the natural distribution of request types
3. Control downstream latency via our mock system
4. Build models beyond linear regression for QPS -> CPU/mem prediction (see the sketch after this list)
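To make point 4 concrete, here's a toy sketch of why the request mix matters for prediction - the data, features, and model choice below are all invented for illustration, not our actual modeling:

```python
# Toy illustration: predict CPU from traffic *composition*, not just volume.
# Everything here is synthetic; a real model would train on recorded traces.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Per-minute windows from recorded traffic: total QPS plus the request-type
# mix -- the part a pure QPS -> CPU linear fit can't see.
n = 500
qps = rng.uniform(100, 2000, n)
read_frac = rng.uniform(0.2, 0.9, n)  # share of cheap reads
write_frac = 1.0 - read_frac          # share of expensive writes
X = np.column_stack([qps, read_frac, write_frac])

# Synthetic ground truth: a write costs ~5x a read, plus noise.
cpu = 0.01 * qps * (read_frac + 5 * write_frac) + rng.normal(0, 1, n)

model = GradientBoostingRegressor().fit(X, cpu)

# "What would last Black Friday's mix cost at 1800 QPS?"
black_friday = np.array([[1800.0, 0.4, 0.6]])
print(f"Predicted CPU: {model.predict(black_friday)[0]:.1f} units")
```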
What performance testing use case did you have in mind? We're actively exploring this space.