Been using Riza for the past few months at our startup to execute code generated by GPT-4.
We use it for local dev, running prompt evals, and running code in prod.
- It was very fast to set up - it took us just a few minutes to execute our first function call.
- Multiple language support - we use both JS and Python for LLM-generated code, and Riza works great with both out of the box.
- No cold starts - this is important because latency matters in our product.
- No infra management - even with AWS Lambda or a similar serverless product, we felt we'd still need to do a bunch of setup to make sure it's fast + secure.
When I was at Retool (Hi Kyle), we wanted to create a way for users to write arbitrary code that got executed against their data in our cloud, e.g. using the MongoDB Node SDK to write Mongo queries in JS instead of only being able to use the pre-defined functions and form fields we had. Engineering researched it for a while, tried out Lambda and a few other things, but we never got there. This would have been sick.
I mean, it's "run a subset that is supported by WASI", not "untrusted code".
Traceback (most recent call last):
  File "/src/code.py", line 3, in <module>
    print(os.listdir('/'))
          ^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 44] No such file or directory: '/'
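For context, the /src/code.py that produces this traceback is presumably something like the following reconstruction (outside the sandbox, the same call just lists the root directory):

```python
import os

# Inside Riza's WASI sandbox no host filesystem is mounted, so listing
# the root raises FileNotFoundError; on a normal machine it succeeds.
print(os.listdir('/'))
```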
OT1H, I do appreciate the sandbox-y nature of running fake code, but OTOH I would think a much less explodey way of shimming out untrusted code would be an in-memory filesystem, so things blow up less.
I guess put another way: who is the target audience for this?
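For what it's worth, the in-memory shim idea above can be pictured as a toy Python sketch (purely illustrative; this is not how Riza or WASI implement anything):

```python
import io

class InMemoryFS:
    """Toy in-memory filesystem: maps absolute paths to byte buffers."""

    def __init__(self):
        self._files = {}

    def write(self, path, data):
        # Store text as bytes so open() can hand back a file-like object.
        self._files[path] = data.encode() if isinstance(data, str) else data

    def open(self, path):
        if path not in self._files:
            raise FileNotFoundError(path)
        return io.BytesIO(self._files[path])

    def listdir(self, prefix="/"):
        # Flat namespace: "listing" a directory is a prefix match.
        return sorted(p for p in self._files if p.startswith(prefix))

fs = InMemoryFS()
fs.write("/src/code.py", "print('hello')")
print(fs.listdir("/"))  # sandboxed code sees only what was written in
```

With something like this behind the scenes, `os.listdir('/')` could return an empty listing instead of raising, at the cost of having to buffer everything in memory.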
> I would think a much less explodey way of shimming out untrusted code would be an in-memory filesystem so things blow up less
Filesystem access is the first item on our roadmap (https://docs.riza.io/reference/roadmap). If you want to see it in action, try opening /src/code.py. While adding an in-memory filesystem would be easy, we want to make it usable for reading and writing potentially large files.
> I guess put another way: who is the target audience for this?
Our customers are using our API to run LLM-generated code, build plugin systems, and power customer-defined data transformations.
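As a rough picture of that usage model, the "send code, get output back" flow looks something like this (hypothetical payload shape; the real endpoint, field names, and auth are defined in the Riza docs at docs.riza.io):

```python
import json

def build_exec_request(language: str, code: str) -> dict:
    # Hypothetical request body for a remote code-execution API:
    # the caller ships source code plus a language tag, and the
    # service runs it in a sandbox and returns the output.
    return {"language": language, "code": code}

payload = build_exec_request("PYTHON", "print('hello from the sandbox')")
print(json.dumps(payload))
```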
Congrats on launching!
We think about the Retool case a lot. User-defined database connectors were an initial inspiration to start building Riza.
How does it compare to “just” using Vagrant, Docker, Firecracker, etc.?