Machine cooperation
Viewed as distributed systems, human societies cooperate at large scale thanks to shared beliefs, laws, systems of accountability, and behavioral constraints. Digital systems have nothing comparable yet; in most cases, humans are still required to mediate between programs.
Cooperation can only be achieved if systems can rely on each other, or if trust is not required in the first place. Sufficient security is a fundamental requirement for cooperation.
There are many strategies for limiting the trust required, including making a system's behavior more predictable (determinism), limiting its access to internal and external resources (sandboxing), and agreeing on common, formally verifiable standards (well-defined interfaces and protocols).
Machine cooperation is desirable. Think of all the devices you own: right now you can use only one at a time, and while one is computing for you, the others sit mostly idle. I think it's time to change that, not just for economic reasons, but because programs cooperating in your best interest, and processes jumping between machines, are pretty cool concepts.
In a few decades, tiny processors will be everywhere, and we had better have appropriate mechanisms to make them secure. The Agoric papers detail market-based cooperation mechanisms that could be built on top of such mechanisms, and capability security shows how to make these systems scale securely.
Technical advances
Over the last decades, a few programming languages have introduced interesting properties that let you run programs while limiting how much you need to trust their behavior:
Limiting internal process resources
Running a program for a predetermined number of steps (see the sketch after this list)
Ethereum Virtual Machine
Limiting external resource access
Safe Tcl
Recursive sandboxing inside the language environment
Apart from the above features, it would also be nice to reduce trust in imported libraries and modules without having to start another operating-system process or virtual machine. In almost all languages, imported code shares the same identity, and therefore the same rights, as the program that imported it. This makes large projects vulnerable to malicious third-party code.
Of course, imported code should in turn trust the code that it imports only as much as necessary. You can visualize this as a tree of running programs. A system that supports this is recursively sandboxable. Only this approach truly realizes the fundamental security Principle of Least Authority:
Every program of the system should operate using the least amount of authority (access rights to resources) necessary to complete the job.
The best example, although on the operating system level, is Genode:
https://genode.org/documentation/general-overview/index
Having full access to the virtual machine execution state
This is somewhat independent of the above features, but it allows for very interesting modes of cooperation:
Stackless Python
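To make the first and last of these properties concrete, here is a minimal sketch (in Python, which is not the language the project uses) of an interpreter whose entire execution state is a plain data structure: it can be run for a predetermined number of steps, inspected, copied, and resumed, which is exactly what enables snapshotting and migration.

```python
# Minimal sketch (hypothetical, not the project's actual VM): a tiny stack
# machine whose whole execution state is plain data, so a host can run it
# for a fixed number of steps, snapshot it, and resume it later.
import copy

class VMState:
    def __init__(self, code):
        self.code = code      # list of (op, arg) tuples
        self.pc = 0           # program counter
        self.stack = []       # operand stack
        self.halted = False

def step(state):
    """Execute exactly one instruction."""
    op, arg = state.code[state.pc]
    state.pc += 1
    if op == "push":
        state.stack.append(arg)
    elif op == "add":
        b, a = state.stack.pop(), state.stack.pop()
        state.stack.append(a + b)
    elif op == "halt":
        state.halted = True

def run(state, max_steps):
    """Run for at most max_steps; return how many steps were used."""
    used = 0
    while not state.halted and used < max_steps:
        step(state)
        used += 1
    return used

# The host stays in control: it can stop the program at any point,
# snapshot its full state, and resume the copy somewhere else.
prog = VMState([("push", 1), ("push", 2), ("add", None), ("halt", None)])
run(prog, max_steps=2)          # out of steps before finishing
snapshot = copy.deepcopy(prog)  # full access to the execution state
run(snapshot, max_steps=10)     # the snapshot resumes and completes
print(snapshot.stack)           # [3]
```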
Demo
I couldn't find any system (at least at the language level, as opposed to the operating-system level) that had all of these properties, so I decided to create one.
What you can see in the following video is a proof of concept: a custom VM and a language that compiles down to it. A function can call another function and execute it in a sandbox with resource limits. If the called function runs out of resources, control returns to the caller, who can either refuel the child and resume its execution or discard it. The sandboxed function can sandbox another function, and so on.
Calls to the external world have to pass through, and be approved by, the chain of enclosing callers.
The VM is fully deterministic (given the same inputs it behaves the same, under any circumstances), single-threaded (control is in exactly one place at a time), and it gives parent processes full access to their child processes' snapshots. They can even send a snapshot to another machine, where it can be resumed.
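For illustration only, here is a hypothetical host-side sketch of that execution model; the names (Sandbox, OUT_OF_FUEL, refuel, approve) are my own, not the project's actual API. The parent runs a child with a fuel budget, decides whether to refuel it when it runs out, and every external call the child makes has to be approved by its parent's policy.

```python
# Hypothetical sketch of the parent/child protocol described above; the
# names are illustrative, not the real API of the VM shown in the video.

OUT_OF_FUEL, DONE = "out_of_fuel", "done"

class Sandbox:
    def __init__(self, program, fuel, approve):
        self.program = program    # a resumable child computation (generator)
        self.fuel = fuel          # how many steps the child may take
        self.approve = approve    # parent's policy for external calls

    def run(self):
        while self.fuel > 0:
            try:
                request = next(self.program)   # child runs one step
            except StopIteration:
                return DONE
            self.fuel -= 1
            if request is not None:            # child asks for an external call
                if not self.approve(request):
                    raise PermissionError(f"denied: {request}")
        return OUT_OF_FUEL                     # child is paused, not destroyed

    def refuel(self, amount):
        self.fuel += amount                    # parent may top up and resume

# A child that mostly computes, but also asks once to write a file.
def child():
    for i in range(5):
        yield None                             # plain computation step
    yield ("write_file", "/tmp/out.txt")       # must be approved by the parent

# The parent only grants what it is willing to be accountable for.
parent_policy = lambda req: req[0] == "write_file"

box = Sandbox(child(), fuel=3, approve=parent_policy)
print(box.run())        # 'out_of_fuel' -- the parent decides what happens next
box.refuel(10)
print(box.run())        # 'done' -- resumed from where it stopped
```

Because the real VM keeps the child's entire state as plain data (as in the earlier sketch), the parent can also serialize such a paused child and send it to another machine to be resumed there.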
https://www.youtube.com/watch?v=MBymOp6bTII
The next step is to implement and demo all of the above on WebAssembly, a virtual machine that is now supported in every major browser.
https://www.youtube.com/watch?v=_5vN9NKeLXE
See #18 for more