Home
Biscuit is an open-source, token-based authorization system.
With Biscuit, you can:
- allow decentralized verification through public key cryptography
- allow offline attenuation where, from each token, a new one with narrower rights can be generated
- create strong security policy enforcement based on a logic language
Biscuit can be used from the command line, and library support is available in Rust, Haskell, Go, Java, WebAssembly and C.
Getting Started
Introduction
Biscuit is a set of building blocks for your authorization layer. By providing a coherent experience, from the authorization token up to the tools for writing policies, it spares you the work of binding together token scopes and authorization servers, and of making sure authorization policies execute correctly in every service. You only need to focus on writing, debugging and deploying your rules.
Biscuit is a bearer token
One of those building blocks is an authorization token that is signed with public key cryptography (like JWT), so that any service knowing the public key can verify the token. The Biscuit token can be transported along with a request, in a cookie, an authorization header, or any other means. It can be stored as binary data or base64 encoded. It is designed to be small enough for use in most protocols, and fast to verify to keep authorization overhead low.
The Biscuit token holds cryptographically signed data indicating the holder's basic rights, and additional constraints on the request. As an example, the token could define its use for read-only operations, or from a specific IP address.
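For instance, such constraints can be expressed as checks (the check syntax is introduced below; client_ip is an illustrative fact name used for this sketch, not a Biscuit built-in):

```datalog
// the token can only be used for read operations
check if operation("read");
// the token can only be used from a specific IP address
// (client_ip is a hypothetical fact the authorizer would provide)
check if client_ip("192.0.2.1");
```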
A biscuit is made of blocks. The first block (called the authority block) states what the token grants access to. The other blocks restrict how the token can be used. Only the authority block can be created by the token emitter, while the other blocks can be freely added by intermediate parties (offline attenuation).
Biscuit also supports offline attenuation (like Macaroons): from a Biscuit token, you can create a new one with more restrictions, without communicating with the service that created the token. A token can only be restricted; it will never gain more rights.
With that, you could have a token carried along with a series of requests between microservices, with the token reducing its rights as it goes deeper in the system. Or you could get a token from a git repository hosting service and attenuate it to just the set of rights needed for usage in CI. Offline attenuation unlocks powerful delegation patterns, without needing to support them directly in the origin service.
For examples of token attenuation, see the "Attenuating a biscuit" section below.
Biscuit is a policy language
Authorization policies are written in a logic language derived from Datalog. Logic languages are well suited for authorization, because they can represent complex relations between elements (like roles, groups, hierarchies) concisely, and efficiently explore and combine multiple rules.
Biscuit's authorization rules can be provided by the authorizer's side, but also by the token. While the token can carry data, it can also contain "checks", conditions that the request must fulfill to be accepted. This is the main mechanism for attenuation: take an existing token, add a check for the current date (expiration) or the operation (restrict to read only).
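For example, attenuating a token can mean appending a block with checks like these (the same forms used later in this documentation):

```datalog
// expiration: the token is only valid before this date
check if time($time), $time < 2022-01-01T00:00:00Z;
// restrict the token to read-only operations
check if operation("read");
```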
Those authorization policies can be hardcoded in your application or be dynamically generated based on context.
Authorization policy example
// We receive a request to read "admin.doc"
// The request contains a token with the following content
user("1234"); // the user is identified as "1234"
check if operation("read"); // the token is restricted to read-only operations
// The authorizer loads facts representing the request
resource("admin.doc");
operation("read");
// The authorizer loads the user's rights
right("1234", "admin.doc", "read");
right("1234", "admin.doc", "write");
// Finally, the authorizer tests policies
// by looking for a set of facts matching them
allow if
user($user_id),
resource($res),
operation($op),
right($user_id, $res, $op);
Biscuit is so much more
Biscuit also comes with a command line application to create, attenuate, inspect and authorize tokens, an online playground for Datalog policies, and WebAssembly components for building frontend tools around policy development.
To sum up, Biscuit provides tools to build a complete, cross-platform authorization system:
- an authorization token, verified by public key cryptography, that supports offline attenuation
- a logic language based on Datalog to write authorization policies
- a server side library, available for multiple languages, to write authorizers in your applications
- a command line application to create, attenuate, inspect and authorize tokens
- WebAssembly components to create, attenuate, inspect and authorize tokens, as well as to write and debug authorization policies
Going further
My First Biscuit: Create and verify your first biscuit in a step-by-step guide.
Datalog Reference: Learn about the logic language that powers biscuits.
Recipes: Have a look at different ways to use biscuits to implement your security policies.
How to Contribute: Find out how to contribute to Biscuit.
My first biscuit
Creating a biscuit
Creating a biscuit requires two things:
- a private key that will allow receiving parties to trust the biscuit contents
- an authority block carrying information (and possibly restrictions)
Creating a private key
The private key can be generated with the biscuit CLI:
❯ biscuit keypair
Generating a new random keypair
Private key: 473b5189232f3f597b5c2f3f9b0d5e28b1ee4e7cce67ec6b7fbf5984157a6b97
Public key: 41e77e842e5c952a29233992dc8ebbedd2d83291a89bb0eec34457e723a69526
The private key is used to generate biscuits, while the public key can be distributed to all services that will use biscuits to authorize requests.
Creating a biscuit token
The most important part of a biscuit is its authority block. It contains data that is signed with the private key, and that can be trusted by receiving parties. The authority block is declared in datalog. Datalog is a declarative logic language that is a subset of Prolog. A Datalog program contains "facts", which represent data, and "rules", which can generate new facts from existing ones.
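As a minimal sketch, a fact and a rule deriving a new fact look like this (authenticated is an illustrative name, not a built-in):

```datalog
// a fact: the token holder is user "1234"
user("1234");
// a rule: generates authenticated("1234") from the user fact
authenticated($id) <- user($id);
```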
In our example, we will create a token that identifies its carrier as a user whose user id is "1234". To do so, we will create a file named authority.biscuit-datalog, with the following contents:
authority.biscuit-datalog
user("1234");
This is a datalog fact: the fact name is user, and it has a single attribute ("1234"). Facts can have several attributes, of various types (ints, strings, booleans, byte arrays, dates, sets).
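For illustration, here are facts with attributes of different types (the fact names other than user and right are made up for this example):

```datalog
user("1234");                          // a string attribute
right("1234", "file1.txt", "read");    // several string attributes
file_size("file1.txt", 1024);          // an integer
expires_at(2021-12-20T00:00:00Z);      // a date
allowed_operations(["read", "write"]); // a set
```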
Now that we have a private key and an authority block, we can go ahead and generate a biscuit:
❯ biscuit generate --private-key 473b5189232f3f597b5c2f3f9b0d5e28b1ee4e7cce67ec6b7fbf5984157a6b97 authority.biscuit-datalog
En0KEwoEMTIzNBgDIgkKBwgKEgMYgAgSJAgAEiBw-OHV3egI0IVjiC1vdB7WZ__t0FCvB2s-81PexdwuqxpAolMr9XDP7T44qgdXxtumc2P3O93pCHaGSuBUs3_f8nsQJ7NU6PdkujZIMStzEJ36CDnxawSZjUAKoTO-a1cCDSIiCiBPsG53WHcpxeydjSpFYNYnvPAeM1tVBvOEG9SQgMrzbw==
You can inspect the generated biscuit with biscuit inspect:
❯ biscuit inspect -
Please input a base64-encoded biscuit, followed by <enter> and ^D
En0KEwoEMTIzNBgDIgkKBwgKEgMYgAgSJAgAEiBw-OHV3egI0IVjiC1vdB7WZ__t0FCvB2s-81PexdwuqxpAolMr9XDP7T44qgdXxtumc2P3O93pCHaGSuBUs3_f8nsQJ7NU6PdkujZIMStzEJ36CDnxawSZjUAKoTO-a1cCDSIiCiBPsG53WHcpxeydjSpFYNYnvPAeM1tVBvOEG9SQgMrzbw==
Authority block:
== Datalog ==
user("1234");
== Revocation id ==
a2532bf570cfed3e38aa0757c6dba67363f73bdde90876864ae054b37fdff27b1027b354e8f764ba3648312b73109dfa0839f16b04998d400aa133be6b57020d
==========
🙈 Public key check skipped 🔑
🙈 Datalog check skipped 🛡️
Biscuit also provides web components that let you inspect biscuits in the browser:
Authorizing a biscuit
Now that we have a biscuit, let's have a look at how a service can authorize a request based on a biscuit.
To do so, the service provides an authorizer, built with:
- facts about the request (current time, resource being accessed, type of the operation)
- facts or rules about access control (ACLs, access matrix)
- checks to apply some restrictions (every check has to pass for the authorization to succeed)
- policies, which are tried in order; the first one to match decides whether the authorization passes or fails
In our case, we'll assume the token is used for a write operation on the resource1 resource.
authorizer.biscuit-datalog
// request-specific data
operation("write");
resource("resource1");
time(2021-12-21T20:00:00Z);
// server-side ACLs
right("1234", "resource1", "read");
right("1234", "resource1", "write");
right("1234", "resource2", "read");
is_allowed($user, $res, $op) <-
user($user),
resource($res),
operation($op),
right($user, $res, $op);
// the request can go through if the current user
// is allowed to perform the current operation
// on the current resource
allow if is_allowed($user, $resource, $op);
There's a bit more happening here: the first three facts give info about the request. Then we have ACLs (they can be declared statically for a small, static user base, or fetched from DB based on the token user).
is_allowed is more interesting: it's a rule. If, given a user, a resource and an operation, there's a right fact that puts them all together, then we know the request can go through.
With all that done, we can go ahead and check our biscuit:
❯ biscuit inspect - --verify-with-file authorizer.datalog --public-key 41e77e842e5c952a29233992dc8ebbedd2d83291a89bb0eec34457e723a69526
Please input a base64-encoded biscuit, followed by <enter> and ^D
En0KEwoEMTIzNBgDIgkKBwgKEgMYgAgSJAgAEiBw-OHV3egI0IVjiC1vdB7WZ__t0FCvB2s-81PexdwuqxpAolMr9XDP7T44qgdXxtumc2P3O93pCHaGSuBUs3_f8nsQJ7NU6PdkujZIMStzEJ36CDnxawSZjUAKoTO-a1cCDSIiCiBPsG53WHcpxeydjSpFYNYnvPAeM1tVBvOEG9SQgMrzbw==
Authority block:
== Datalog ==
user("1234");
== Revocation id ==
a2532bf570cfed3e38aa0757c6dba67363f73bdde90876864ae054b37fdff27b1027b354e8f764ba3648312b73109dfa0839f16b04998d400aa133be6b57020d
==========
β
Public key check succeeded π
β
Authorizer check succeeded π‘οΈ
Matched allow policy: allow if is_allowed($user, $resource, $op)
// request-specific data
operation("write");
resource("resource1");
time(2021-12-21T20:00:00Z);
// server-side ACLs
right("1234", "resource1", "read");
right("1234", "resource1", "write");
right("1234", "resource2", "read");
is_allowed($user, $res, $op) <-
user($user),
resource($res),
operation($op),
right($user, $res, $op);
// the request can go through if the current user
// is allowed to perform the current operation
// on the current resource
allow if is_allowed($user, $resource, $op);
The CLI checks the biscuit signatures, and then the datalog engine will try to match policies. Here, it succeeded, and the CLI shows you the policy that matched.
Attenuating a biscuit
One of biscuit's strengths is the ability to attenuate tokens, restricting their use.
Attenuating a biscuit token is done by appending a block containing a check.
Let's attenuate our first token by adding a TTL (Time To Live) check: this way the new token will only be usable for a given period of time. In the authorizer above, we provided a time fact that was not used in a policy or a check. We can add a block that will make sure the token is not used after a certain date.
block1.biscuit-datalog
check if time($time), $time <= 2021-12-20T00:00:00Z;
The check requires two things to succeed: first, the current time must be declared through the time() fact, and second, that time must be smaller than 2021-12-20T00:00:00Z.
We can create a new token by appending this block to our existing token:
❯ biscuit attenuate - --block-file 'block1.biscuit-datalog'
Please input a base64-encoded biscuit, followed by <enter> and ^D
En0KEwoEMTIzNBgDIgkKBwgKEgMYgAgSJAgAEiBw-OHV3egI0IVjiC1vdB7WZ__t0FCvB2s-81PexdwuqxpAolMr9XDP7T44qgdXxtumc2P3O93pCHaGSuBUs3_f8nsQJ7NU6PdkujZIMStzEJ36CDnxawSZjUAKoTO-a1cCDSIiCiBPsG53WHcpxeydjSpFYNYnvPAeM1tVBvOEG9SQgMrzbw==
En0KEwoEMTIzNBgDIgkKBwgKEgMYgAgSJAgAEiBw-OHV3egI0IVjiC1vdB7WZ__t0FCvB2s-81PexdwuqxpAolMr9XDP7T44qgdXxtumc2P3O93pCHaGSuBUs3_f8nsQJ7NU6PdkujZIMStzEJ36CDnxawSZjUAKoTO-a1cCDRqUAQoqGAMyJgokCgIIGxIGCAUSAggFGhYKBAoCCAUKCAoGIICP_40GCgQaAggCEiQIABIgkzpUMZubXcd8K7mWNchjb0D2QXeYoWtlZw2KMryKubUaQOFlx4iPKUqKeJrEH4MKO7tjM3H9z1rYbOj-gKGTtYJ4bac0kIoWl9v_7q7qN7fQJJgj0IU4jx4_QhxIk9SeigMiIgogqvHkuXrYkoMRvKgT9zNV4BEKC5W2K8L7NcGiX44ASwE=
Now, let's try to check it again (pay special attention to the time fact we added in the authorizer):
// request-specific data
operation("write");
resource("resource1");
time(2021-12-21T20:00:00Z);
// server-side ACLs
right("1234", "resource1", "read");
right("1234", "resource1", "write");
right("1234", "resource2", "read");
is_allowed($user, $res, $op) <-
user($user),
resource($res),
operation($op),
right($user, $res, $op);
// the request can go through if the current user
// is allowed to perform the current operation
// on the current resource
allow if is_allowed($user, $resource, $op);
❯ biscuit inspect - --verify-with-file authorizer.datalog --public-key 41e77e842e5c952a29233992dc8ebbedd2d83291a89bb0eec34457e723a69526
Please input a base64-encoded biscuit, followed by <enter> and ^D
En0KEwoEMTIzNBgDIgkKBwgKEgMYgAgSJAgAEiBw-OHV3egI0IVjiC1vdB7WZ__t0FCvB2s-81PexdwuqxpAolMr9XDP7T44qgdXxtumc2P3O93pCHaGSuBUs3_f8nsQJ7NU6PdkujZIMStzEJ36CDnxawSZjUAKoTO-a1cCDRqUAQoqGAMyJgokCgIIGxIGCAUSAggFGhYKBAoCCAUKCAoGIICP_40GCgQaAggCEiQIABIgkzpUMZubXcd8K7mWNchjb0D2QXeYoWtlZw2KMryKubUaQOFlx4iPKUqKeJrEH4MKO7tjM3H9z1rYbOj-gKGTtYJ4bac0kIoWl9v_7q7qN7fQJJgj0IU4jx4_QhxIk9SeigMiIgogqvHkuXrYkoMRvKgT9zNV4BEKC5W2K8L7NcGiX44ASwE=
Authority block:
== Datalog ==
user("1234");
== Revocation id ==
a2532bf570cfed3e38aa0757c6dba67363f73bdde90876864ae054b37fdff27b1027b354e8f764ba3648312b73109dfa0839f16b04998d400aa133be6b57020d
==========
Block n°1:
== Datalog ==
check if time($time), $time <= 2021-12-20T00:00:00Z;
== Revocation id ==
e165c7888f294a8a789ac41f830a3bbb633371fdcf5ad86ce8fe80a193b582786da734908a1697dbffeeaeea37b7d0249823d085388f1e3f421c4893d49e8a03
==========
β
Public key check succeeded π
β Authorizer check failed π‘οΈ
An allow policy matched: allow if is_allowed($user, $resource, $op)
The following checks failed:
Block 1 check: check if time($time), $time <= 2021-12-20T00:00:00Z
Here it failed because the date provided in the authorizer (time(2021-12-21T20:00:00Z)) is greater than the expiration date specified in the check (check if time($time), $time <= 2021-12-20T00:00:00+00:00).
Going further
You can learn more about datalog by reading the datalog reference.
Authorization policies
Datalog authorization policies
A Biscuit token can be verified by applications written in various languages. To make sure that authorization policies are interpreted the same way everywhere, and to avoid brittle solutions based on custom parsers of text fields, Biscuit specifies an authorization language inspired by Datalog, which must be parsed and executed identically by every implementation.
Logic languages are well suited for authorization policies, because they can represent complex relations between elements (like roles, groups, hierarchies) concisely, and efficiently explore and combine multiple rules.
Biscuit's language loads facts, data that can come from the token (user id), from the request (file name, read or write access, current date) or from the application's internal databases (users, roles, rights).
It then uses those facts to decide whether the request is allowed to go through. It does so through two mechanisms:
- a check list: each check validates the presence of one or more facts. Every check must succeed for the request to be allowed. Example: check if time($time), $time < 2022-01-01T00:00:00Z for an expiration date.
- allow/deny policies: a list of policies that are tried in sequence until one of them matches. If it is an allow policy, the request is accepted, while if it is a deny policy the request is denied. If no policy matches, the request is also denied. Example: allow if resource($res), operation($op), right($res, $op).
Allow/deny policies can only be defined in the application, while checks can come from the application or the token: tokens can only add restrictions (through checks), while only the application can approve a token (by defining an allow policy).
Tokens can be attenuated by appending a block containing checks.
First code example
Here we model an application allowing read or write access to files. It issues API tokens to logged-in users, and those tokens can be scoped to only allow specific operations.
Let's consider a user whose user id is "1234", and who has generated a token which is only allowed to perform a read operation on .txt files. The user then issues the following HTTP request on the service API: GET /files/file1.txt.
Here is how the scenario can be expressed with datalog:
// the token contains information about its holder
user("1234");
// the token contains checks:
// it is only usable for read operations
check if operation("read");
// it is only usable on txt files.
check if resource($file), $file.ends_with(".txt");
// the application provides context about the request:
resource("file1.txt"); // based on the request path
operation("read"); // based on the request HTTP method
// the application only accepts tokens which contain user information
allow if user($u);
It is important to remember that fact names (user, resource, operation) don't have a specific meaning within datalog. As long as fact names are consistent between facts and checks / policies, they can be named freely (as long as the name starts with a letter and contains only letters, digits, _ or :).
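For example, renaming user to subject works just as well, as long as the policies use the same name:

```datalog
subject("1234");
allow if subject($u);
```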
Datalog in Biscuit
While this page gives an overview of how datalog works and can be used to describe access control, the complete datalog reference is available for a detailed description of the datalog engine inner workings, as well as a list of all available functions and operations.
Checks
The first part of the authorization logic comes with checks. They are queries over the Datalog facts: if the query produces something (i.e., the underlying rule generates one or more facts), the check is validated; if it produces nothing, the check fails. For a token verification to be successful, all of the checks must succeed.
In the previous example, there are two checks:
// the token contains checks:
// it is only usable for read operations
check if operation("read");
// it is only usable on txt files.
check if resource($file), $file.ends_with(".txt");
The first one ensures that the fact operation("read") is present. This kind of fact (information about the request) is often called an ambient fact. Common ambient facts are resource(…) (the resource being accessed), operation(…) (the operation being attempted), and time(…) (the datetime at which the request was received).
The second check is a bit more sophisticated: instead of matching an exact fact, it starts by matching any fact named resource(), and binds a variable named $file to the actual resource name. It then checks that the resource name ends with ".txt". Here, $file.ends_with(".txt") is an expression. For the check to be valid, all the expressions it contains must evaluate to true.
Checks can contain several predicates (something matching on facts and introducing variables) and several expressions:
- all the predicates must match existing facts
- if a variable appears several times, all the values must match
- all the expressions must evaluate to true
Let's illustrate this with an example: a check that ensures a user can perform an operation on a resource only if explicitly allowed by a corresponding right() fact. The check also ensures that the operation is either read or create.
user("1234");
operation("read");
resource("file1.txt");
right("1234", "file1.txt", "read");
check if user($u),
operation($o), resource($r), right($u, $r, $o),
["read", "create"].contains($o);
allow if user($u);
Allow and deny policies
The validation in Biscuit relies on a list of allow or deny policies that are evaluated after all of the checks have succeeded. Like checks, they are queries that must find a matching set of facts to succeed. If a policy does not match, the next one is tried; if it succeeds, an allow policy makes the request validation succeed, while a deny policy makes it fail. If no policy matches, the validation fails.
Policies let you declare a series of alternatives, in descending priority. This is useful when several authorization paths are available, and differs from checks, which must all succeed. You can think of it as such:
- checks are combined with and;
- policies are combined with or.
Here, the request is authorized if the token holder has the corresponding right declared, or if the token carries a special admin(true) fact.
user("1234");
// uncomment and see what happens, then try to remove the `user` fact, or the `right` fact
// admin(true);
operation("read");
resource("file1.txt");
right("1234", "file1.txt", "read");
allow if user($u),
operation($o), resource($r), right($u, $r, $o);
allow if admin(true);
A common pattern is to only use checks for authorization. In that case, a single allow if true policy will be necessary for authorization to go through.
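In that pattern, all the restrictions live in the checks, and the policy is a simple pass-through:

```datalog
// all the authorization logic is carried by checks
check if user($u);
check if operation("read");
// a single pass-through policy
allow if true;
```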
Blocks
A token is made of blocks of cryptographically verified data. Each token has at least one block called the authority block. Only the authority block is created and signed by the token emitter, while other blocks can be freely added by intermediate parties. By default, blocks added after the authority block are self-contained and can only restrict what the token can do.
A block can contain:
- facts: they represent data. Each block can define new facts.
- rules: they can generate new facts from existing ones. Each block can define new rules.
- checks: they are queries that need to match in order to make the biscuit valid. Each block can define new checks.
In most cases, the purpose of a block is to add checks that depend on facts provided by the authorizer.
Here is how security is guaranteed:
- All the facts and rules from the token are loaded in the datalog engine; they are tied to the block that defined them.
- All the facts and rules from the authorizer are loaded in the datalog engine.
- Rules are repeatedly applied until no new fact is generated. By default, rules are only applied on facts defined in the authority block, the authorizer or the block that defined the rule. This way, facts defined in a non-authority block can only be seen from the block itself.
- Checks are applied on the facts. By default, checks are only applied on facts defined in the authority block, the authorizer or the block that defined the check. This way, facts defined in a non-authority block can only fulfil checks from the same block.
- Authorizer policies are applied on the facts. By default, policies are only applied on facts defined in the authority block or the authorizer. This way, facts defined in a non-authority block cannot fulfil authorizer policies.
This model guarantees that adding a block can only restrict what a token can do: by default, the only effect of adding a block to a token is to add new checks.
// the token emitter grants read access to file1
right("file1", "read");
// the authority block trusts facts from itself and the authorizer
check if action("read");
right("file2", "read");
// blocks trust facts from the authority block and the authorizer
check if action("read");
// blocks trust their own facts
check if right("file2", "read");
resource("file1");
action("read");
// the authorizer does not trust facts from additional blocks
check if right("file2", "read");
// the authorizer trusts facts from the authority block
check if right("file1", "read");
allow if true;
It is possible for a rule, a check or a policy to consider facts defined in non-authority third-party blocks by explicitly providing the public key of the keypair that signed the block. This allows considering facts from a non-authority block while still making sure they come from a trusted party.
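The datalog reference covers the exact syntax; as a sketch, a check can be scoped to facts signed by a given third-party key with the trusting annotation (the fact name and the hex key below are placeholders for this example):

```datalog
// only consider group facts signed by this specific third-party key
// (the ed25519 key shown here is a placeholder, not a real key)
check if group("admin")
  trusting ed25519/6e9e6d5a75cf0c0e87ec1256b4dfed0ca3ba452912d213fcc70f8516583db9db;
```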
Example tokens
Let's make an example from an S3-like application on which we can store and retrieve files, with users having access to "buckets" holding a list of files.
Here is a first example token, that will hold a user id. This token only contains one block, that has been signed with the root private key. The authorizer's side knows the root public key and, upon receiving the request, will deserialize the token and verify its signature, thus authenticating the token.
Here the token carries a single block, authority, that is the initial block containing basic rights, which can be refined in subsequent blocks.
Let's assume the user is sending this token with a PUT /bucket_5678/folder1/hello.txt
HTTP request. The authorizer would then load the token's facts and rules, along with facts from the request:
user("1234");
operation("write");
resource("bucket_5678", "/folder1/hello.txt");
time(2020-11-17T12:00:00+00:00);
The authorizer would also be able to load authorization data from its database, like ownership information: owner("1234", "bucket_1234"), owner("1234", "bucket_5678"), and owner("ABCD", "bucket_ABCD"). In practice, this data could be filtered by limiting it to facts related to the current resource, or by extracting the user id from the token with a query.
The authorizer can also load its own rules, like one granting rights on a resource we own:
// the resource owner has all rights on the resource
right($bucket, $path, $operation) <-
resource($bucket, $path),
operation($operation),
user($id),
owner($id, $bucket);
This rule will generate a right fact if it finds data matching the variables.
We end up with a system with the following facts:
user("1234");
operation("write");
resource("bucket_5678", "/folder1/hello.txt");
time(2020-11-17T12:00:00+00:00);
owner("1234", "bucket_1234");
owner("1234", "bucket_5678");
owner("ABCD", "bucket_ABCD");
right("bucket_5678", "/folder1/hello.txt", "write");
Finally, the authorizer provides a policy to test that we have the rights for this operation:
allow if
right($bucket, $path, $operation),
resource($bucket, $path),
operation($operation);
Here we can find matching facts, so the request succeeds. If the request were done on bucket_ABCD, we would not be able to generate the right fact for it and the request would fail.
Now, what if we wanted to limit access to reading /folder1/hello.txt in bucket_5678?
We could ask the authorization server to generate a token with only that specific access:
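The interactive example is not reproduced here; such a token's authority block could, for instance, carry the right fact directly instead of a user fact:

```datalog
// authority block: grants read access to this one file only
right("bucket_5678", "/folder1/hello.txt", "read");
```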
Without a user fact, the authorizer would be unable to generate more right facts and would only have the one provided by the token.
But we could also take the first token, and restrict it by adding a block containing a new check:
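As a sketch, the appended block could contain a single check restricting operations to read:

```datalog
// block 1: the token can now only be used for read operations
check if operation("read");
```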
With that token, if the holder tried to do a PUT /bucket_5678/folder1/hello.txt request, we would end up with the following facts:
user("1234");
operation("write");
resource("bucket_5678", "/folder1/hello.txt");
time(2020-11-17T12:00:00+00:00);
owner("1234", "bucket_1234");
owner("1234", "bucket_5678");
owner("ABCD", "bucket_ABCD");
right("bucket_5678", "/folder1/hello.txt", "write");
The authorizer's policy would still succeed, but the check from block 1 would fail because it cannot find operation("read").
By playing with the facts provided on the token and authorizer sides, generating data through rules, and restricting access with a series of checks, it is possible to build powerful rights management systems, with fine-grained controls, in a small, cryptographically secured token.
Usage
Command Line
Install
From pre-built packages
Pre-built packages are available: https://github.com/biscuit-auth/biscuit-cli/releases/latest
With cargo
cargo install biscuit-cli
From source
git clone https://github.com/biscuit-auth/biscuit-cli.git
cd biscuit-cli
cargo install --path .
Create a key pair
$ # this will output the keypair, you can then copy/paste the components
$ biscuit keypair
> Generating a new random keypair
> Private key: 4aa4bae701c6eb05cfe0bdd68d5fab236fc0d0d3dcb2a9b582a0d87b23e04500
> Public key: 687b536c502f10f5978eee2d0c04f2869d15cf7858983dc50b6729b15e203809
$ # this will save the private key to a file so you can use it later
$ biscuit keypair --only-private-key > private-key-file
$ cat private-key-file
> e4d17ae4fd444ace42ab0a813c242643cf9b4ef96ca07c502e8e72142a3e8a2e
Generate a public key from a private key
$ biscuit keypair --from-private-key-file private-key-file --only-public-key
> 51c20fb821f7d6a3939fba5c80f0915d80087799de6988a3259c6782bea93d7f
$ biscuit keypair --from-private-key-file private-key-file --only-public-key > public-key-file
Create a token
$ # this will open your text editor and let you type in the authority block as datalog
$ biscuit generate --private-key-file private-key-file
> ChcIADgBQhEKDwgEEgIIABIHIgVmaWxlMRoglMviMbBdrIrlVsOaPNw9EhA62e1VAO2mCYxg5mcr-FgiRAogKAZh5JjRh6n3UTQIVlptzWsAhj92UaOjWZQOVYYqaTASIFG7bXx0Y35LjRWcJHs7N6CAEOBJOuuainDg4Rg_S8IG
$ cat << EOF > authority-block
right("file1");
EOF
$ # this will read the authority block from a file
$ biscuit generate --private-key-file private-key-file authority-block
> En4KFAoFZmlsZTEYAyIJCgcIBBIDGIAIEiQIABIgyOeDz8eTDEWRtx5NBlsL_ajPBg2CmhLj_xylsxpyaPQaQNXM41V4wk-NGskgvcV6ygh1xL7CqxE51urXKqC81DvEkBNxYlr-cgq2hr0M13pLFxc0pKontpWYQiESNXIa9AEiIgog5v8ptssVfc3ES9eDArruxmaOBRm0n95SitePxoMzFPk=
$ # this will read the authority block from standard input
$ echo 'right("file1");' | biscuit generate --private-key-file private-key-file -
> En4KFAoFZmlsZTEYAyIJCgcIBBIDGIAIEiQIABIgtuIug-thwbWXD8Kt8UqQJCiqe80n4527AiyOV7drwvgaQCpDRNl7dsjBwGzqJMh2qHz2Az6b15kczqkVhJjuKabvZ0q5h_dhVxjYdxMvTJNrL-AictItXU4aqngpIHyLsAciIgog1YhpZ9b8mLfZRW-Id2qLfwNFK2O5Nd4Xa9t9ffnQGeA=
$ # the biscuit can be generated as raw bytes, with no b64 encoding
$ echo 'right("file1");' | biscuit generate --raw --private-key-file private-key-file - > biscuit-file.bc
Inspect a token
$ biscuit inspect --raw-input biscuit-file.bc --public-key-file public-key-file
> Open biscuit
> Authority block:
> == Datalog ==
> right("file1");
>
> == Revocation id ==
> a1675990f0b23015019a49b6b003c14fcfd2be134c9899b8146f4f702f8089486ca20766e188cd3388eb8ef653327a78e2dc0f6e42d31be8d97b1c5a8488eb0e
> ==========
> ✅ Public key check succeeded 🔑
> 🙈 Datalog check skipped 🛡️
Authorize a token
$ biscuit inspect --raw-input biscuit-file.bc \
--public-key-file public-key-file \
--authorize-with 'allow if right("file1");' \
--include-time
> Open biscuit
> Authority block:
> == Datalog ==
> right("file1");
>
> == Revocation id ==
> a1675990f0b23015019a49b6b003c14fcfd2be134c9899b8146f4f702f8089486ca20766e188cd3388eb8ef653327a78e2dc0f6e42d31be8d97b1c5a8488eb0e
>
> ==========
>
> β
Public key check succeeded π
> β
Authorizer check succeeded π‘οΈ
> Matched allow policy: allow if right("file1")
Generate a snapshot
biscuit inspect can store the authorization context to a file, which can be inspected later. The file will contain both the token contents and the authorizer contents.
$ biscuit inspect --raw-input biscuit-file.bc \
--public-key-file public-key-file \
--authorize-with 'allow if right("file1");' \
--include-time \
--dump-snapshot-to snapshot-file
> Open biscuit
> Authority block:
> == Datalog ==
> right("file1");
>
> == Revocation id ==
> a1675990f0b23015019a49b6b003c14fcfd2be134c9899b8146f4f702f8089486ca20766e188cd3388eb8ef653327a78e2dc0f6e42d31be8d97b1c5a8488eb0e
>
> ==========
>
> β
Public key check succeeded π
> β
Authorizer check succeeded π‘οΈ
> Matched allow policy: allow if right("file1")
Attenuate a token
# this will create a new biscuit token with the provided block appended
$ biscuit attenuate --raw-input biscuit-file.bc --block 'check if operation("read")'
> En4KFAoFZmlsZTEYAyIJCgcIBBIDGIAIEiQIABIgX9V0q_5ZU5NpVUKRF_Z8BPbLKl_9TL1bFeiqBQ97LFoaQKFnWZDwsjAVAZpJtrADwU_P0r4TTJiZuBRvT3AvgIlIbKIHZuGIzTOI6472UzJ6eOLcD25C0xvo2XscWoSI6w4afAoSGAMyDgoMCgIIGxIGCAMSAhgAEiQIABIgCxzPZaKjKJ6_C9cy39I16dgCLu9I5EqPNHwGiOl_eOMaQFU00BW0iFfxxt1pMp4vO-R26mPxx9XMKEEyx80Fugf1OFAPmTdefYVm_vp6rV02GcODrCF3C0Ua3QGopor7uAsiIgogSfbsyId59q50CqdJhxmBYXhqMYcTMYsB1eVnDNw3MTY=
# this will add a TTL check to an existing biscuit token
$ biscuit attenuate --raw-input biscuit-file.bc --add-ttl "1 day" --block ""
> En4KFAoFZmlsZTEYAyIJCgcIBBIDGIAIEiQIABIgX9V0q_5ZU5NpVUKRF_Z8BPbLKl_9TL1bFeiqBQ97LFoaQKFnWZDwsjAVAZpJtrADwU_P0r4TTJiZuBRvT3AvgIlIbKIHZuGIzTOI6472UzJ6eOLcD25C0xvo2XscWoSI6w4amQEKLwoBdBgDMigKJgoCCBsSBwgFEgMIgQgaFwoFCgMIgQgKCAoGIP7KrpMGCgQaAggAEiQIABIgU2t5XP1OA9VfujCZAZSVbBeE0WMBqMHViXwEhzoTkSAaQN1jHm8uqZVjhfO_J7URfL2NHK4_E7JJD45jvIFFgrgAmcksrhIc5qgyq1U7D0Jbo5tR7H4w3UvMN0sAEJzSjAoiIgogrolYRQ67V5SHiB7ii_YHPU5uwzDuHc1rL2WGKiAvH_c=
Seal a token
# this will prevent a biscuit from being attenuated further
$ biscuit seal --raw-input biscuit-file.bc
Inspect a snapshot
inspect-snapshot
displays the contents of a snapshot (facts, rules, checks, policies), as well as how much time has been spent evaluating datalog.
The authorization process can be resumed with --authorize-interactive
, --authorize-with
, or --authorize-with-file
.
The authorizer can be queried with --query
or --query-all.
$ biscuit inspect-snapshot snapshot-file \
--authorize-with "" \
--query 'data($file) <- right($file)'
// Facts:
// origin: 0
right("file1");
// origin: authorizer
time(2023-11-17T13:59:04Z);
// Policies:
allow if right("file1");
⏱️ Execution time: 13μs (0 iterations)
✅ Authorizer check succeeded 🛡️
Matched allow policy: allow if right("file1")
🔎 Running query: data($file) <- right($file)
data("file1")
C
The Rust version of Biscuit can be found on Github, crates.io and on docs.rs.
Install
You can download pre-built packages and source code releases on the Github releases page of the Biscuit Rust project.
If there is no release available for your platform, you can build one as follows:
- install Rust
- install cargo-c
- build the project:
cargo cinstall --release --prefix=/usr --destdir=./build
This will create the following files in the build/
directory:
.
└── usr
    ├── include
    │   └── biscuit_auth
    │       └── biscuit_auth.h
    └── lib
        ├── libbiscuit_auth.a
        ├── libbiscuit_auth.so -> libbiscuit_auth.so.2.0.0
        ├── libbiscuit_auth.so.2 -> libbiscuit_auth.so.2.0.0
        ├── libbiscuit_auth.so.2.0.0
        └── pkgconfig
            └── biscuit_auth.pc
Create a root key
uint8_t *seed = <generate this from a CSPRNG>;
KeyPair * root_kp = key_pair_new(seed, seed_len);
printf("key_pair creation error? %s\n", error_message());
PublicKey* root = key_pair_public(root_kp);
Create a token
BiscuitBuilder* b = biscuit_builder(root_kp);
biscuit_builder_add_authority_fact(b, "right(\"file1\", \"read\")");
Biscuit * biscuit = biscuit_builder_build(b, (const uint8_t * ) seed, seed_len);
Create an authorizer
Authorizer * authorizer = biscuit_authorizer(b2);
authorizer_add_check(authorizer, "check if right(\"efgh\")");
if(!authorizer_authorize(authorizer)) {
printf("authorizer error(code = %d): %s\n", error_kind(), error_message());
if(error_kind() == LogicFailedChecks) {
uint64_t error_count = error_check_count();
printf("failed checks (%ld):\n", error_count);
for(uint64_t i = 0; i < error_count; i++) {
if(error_check_is_authorizer(i)) {
uint64_t check_id = error_check_id(i);
const char* rule = error_check_rule(i);
printf(" Authorizer check %ld: %s\n", check_id, rule);
} else {
uint64_t check_id = error_check_id(i);
uint64_t block_id = error_check_block_id(i);
const char* rule = error_check_rule(i);
printf(" Block %ld, check %ld: %s\n", block_id, check_id, rule);
}
}
}
} else {
printf("authorizer succeeded\n");
}
Attenuate a token
BlockBuilder* bb = biscuit_create_block(biscuit);
block_builder_add_check(bb, "check if operation(\"read\")");
block_builder_add_fact(bb, "hello(\"world\")");
char *seed2 = "ijklmnopijklmnopijklmnopijklmnop";
KeyPair * kp2 = key_pair_new((const uint8_t *) seed2, seed_len);
Biscuit* b2 = biscuit_append_block(biscuit, bb, kp2);
Seal a token
uint64_t size = biscuit_serialized_size(biscuit);
printf("serialized size: %ld\n", size);
uint8_t * buffer = malloc(size);
uint64_t written = biscuit_serialize(biscuit, buffer);
Reject revoked tokens
TODO
Query data from the authorizer
TODO
Go
The Go version of Biscuit can be found on Github.
Install
In go.mod
:
require(
github.com/biscuit-auth/biscuit-go v2.2.0
)
Create a root key
func CreateKey() (ed25519.PublicKey, ed25519.PrivateKey) {
rng := rand.Reader
publicRoot, privateRoot, _ := ed25519.GenerateKey(rng)
return publicRoot, privateRoot
}
Create and serialize a token
rng := rand.Reader
publicRoot, privateRoot, _ := ed25519.GenerateKey(rng)
authority, err := parser.FromStringBlockWithParams(`
right("/a/file1.txt", {read});
right("/a/file1.txt", {write});
right("/a/file2.txt", {read});
right("/a/file3.txt", {write});
`, map[string]biscuit.Term{"read": biscuit.String("read"), "write": biscuit.String("write")})
if err != nil {
panic(fmt.Errorf("failed to parse authority block: %v", err))
}
builder := biscuit.NewBuilder(privateRoot)
builder.AddBlock(authority)
b, err := builder.Build()
if err != nil {
panic(fmt.Errorf("failed to build biscuit: %v", err))
}
token, err := b.Serialize()
if err != nil {
panic(fmt.Errorf("failed to serialize biscuit: %v", err))
}
// token is now a []byte, ready to be shared
// The biscuit spec mandates the use of URL-safe base64 encoding for textual representation:
fmt.Println(base64.URLEncoding.EncodeToString(token))
Parse and authorize a token
b, err := biscuit.Unmarshal(token)
if err != nil {
panic(fmt.Errorf("failed to deserialize token: %v", err))
}
authorizer, err := b.Authorizer(publicRoot)
if err != nil {
panic(fmt.Errorf("failed to verify token and create authorizer: %v", err))
}
authorizerContents, err := parser.FromStringAuthorizerWithParams(`
resource({res});
operation({op});
allow if right({res}, {op});
`, map[string]biscuit.Term{"res": biscuit.String("/a/file1.txt"), "op": biscuit.String("read")})
if err != nil {
panic(fmt.Errorf("failed to parse authorizer: %v", err))
}
authorizer.AddAuthorizer(authorizerContents)
if err := authorizer.Authorize(); err != nil {
fmt.Printf("failed authorizing token: %v\n", err)
} else {
fmt.Println("success authorizing token")
}
Attenuate a token
b, err = biscuit.Unmarshal(token)
if err != nil {
panic(fmt.Errorf("failed to deserialize biscuit: %v", err))
}
// Attenuate the biscuit by appending a new block to it
blockBuilder := b.CreateBlock()
block, err := parser.FromStringBlockWithParams(`
check if resource($file), operation($permission), [{read}].contains($permission);`,
map[string]biscuit.Term{"read": biscuit.String("read")})
if err != nil {
panic(fmt.Errorf("failed to parse block: %v", err))
}
blockBuilder.AddBlock(block)
attenuatedBiscuit, err := b.Append(rng, blockBuilder.Build())
if err != nil {
panic(fmt.Errorf("failed to append: %v", err))
}
// attenuatedToken is a []byte, representing an attenuated token
attenuatedToken, err := attenuatedBiscuit.Serialize()
if err != nil {
panic(fmt.Errorf("failed to serialize biscuit: %v", err))
}
Reject revoked tokens
The Biscuit.RevocationIds
method returns the list of revocation identifiers as byte arrays.
identifiers := token.RevocationIds()
Query data from the authorizer
The Authorizer.Query
method takes a rule as argument and extracts the data from generated facts as tuples.
func Query(authorizer biscuit.Authorizer) (biscuit.FactSet, error) {
rule, err := parser.FromStringRule(`data($name, $id) <- user($name, $id)`)
if err != nil {
return nil, fmt.Errorf("failed to parse check: %v", err)
}
return authorizer.Query(rule)
}
Haskell
Biscuit tokens can be used in haskell through biscuit-haskell
.
Install
In the cabal file:
biscuit-haskell ^>= 0.3.0
Create a key pair
import Auth.Biscuit
main :: IO ()
main = do
secretKey <- newSecret
let publicKey = toPublic secretKey
-- will print the hex-encoded secret key
print $ serializeSecretKeyHex secretKey
-- will print the hex-encoded public key
print $ serializePublicKey publicKey
Create a token
{-# LANGUAGE QuasiQuotes #-}
import Auth.Biscuit
myBiscuit :: SecretKey -> IO (Biscuit Open Verified)
myBiscuit secretKey =
-- datalog blocks are declared inline and are parsed
-- at compile time
mkBiscuit secretKey [block|
user("1234");
check if operation("read");
|]
Authorize a token
{-# LANGUAGE QuasiQuotes #-}
import Auth.Biscuit
import Data.Time (getCurrentTime)
myCheck :: Biscuit p Verified -> IO Bool
myCheck b = do
now <- getCurrentTime
-- datalog blocks can reference haskell variables with the
-- special `{}` syntax. This allows dynamic datalog generation
-- without string concatenation
result <- authorizeBiscuit b [authorizer|
time({now});
operation("read");
allow if true;
|]
case result of
Left a -> pure False
Right _ -> pure True
Attenuate a token
{-# LANGUAGE QuasiQuotes #-}
import Auth.Biscuit
import Data.Time (UTCTime)
-- only `Open` biscuits can be attenuated
addTTL :: UTCTime -> Biscuit Open c -> IO (Biscuit Open c)
addTTL ttl b =
addBlock [block|check if time($time), $time < {ttl}; |] b
Seal a token
import Auth.Biscuit
-- `Open` biscuits can be sealed. The resulting biscuit
-- can't be attenuated further
sealBiscuit :: Biscuit Open c -> Biscuit Sealed c
sealBiscuit b = seal b
Reject revoked tokens
Revoked tokens can be rejected directly during parsing:
import Auth.Biscuit
parseBiscuit :: IO Bool
parseBiscuit = do
let parsingOptions = ParserConfig
{ encoding = UrlBase64
, getPublicKey = \_ -> myPublicKey
-- ^ biscuits carry a key identifier, allowing you to choose the
-- public key used for signature verification. Here we ignore
-- it, to always use the same public key
, isRevoked = fromRevocationList revokedIds
-- ^ `fromRevocationList` takes a list of revoked ids, but
-- the library makes it possible to run an effectful check instead
-- if you don't have a static revocation list
}
result <- parseWith parsingOptions encodedBiscuit
case result of
Left _ -> pure False
Right _ -> pure True
Query data from the authorizer
The values that made the authorizer succeed are kept around in the
authorization success, and can be queried directly with getBindings
.
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes #-}
import Auth.Biscuit
checkBiscuit :: Biscuit p Verified -> IO Text
checkBiscuit b = do
result <- authorizeBiscuit b [authorizer| allow if user($user); |]
case result of
Left a -> throwError …
Right AuthorizedBiscuit{authorizationSuccess} ->
case getSingleVariableValue (getBindings authorizationSuccess) "user" of
Just userId -> pure userId
-- ^ this will only match if a unique user id is
-- retrieved from the matched variables
Nothing -> throwError …
You can also provide custom queries that will be run against all the
generated facts. By default, only facts from the authority block
and the authorizer are queried. Block facts can be queried either
by appending trusting previous
to the query (be careful, this will
return facts coming from untrusted sources), or by appending
trusting {publicKey}
, to return facts coming from blocks signed by
the specified key pair.
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes #-}
import Auth.Biscuit
checkBiscuit :: Biscuit p Verified -> IO Text
checkBiscuit b = do
result <- authorizeBiscuit b [authorizer| allow if true; |]
case result of
Left a -> throwError …
Right success ->
case getSingleVariableValue (queryAuthorizerFacts success [query|user($user)|]) "user" of
Just userId -> pure userId
-- ^ this will only match if a unique user id is
-- retrieved from the matched variables
Nothing -> throwError …
Java
The Java version of Biscuit can be found on Github, and maven.
Install
In pom.xml
:
<dependency>
<groupId>org.biscuitsec</groupId>
<artifactId>biscuit</artifactId>
<version>2.3.1</version>
<type>jar</type>
</dependency>
Create a root key
public KeyPair root() {
return new KeyPair();
}
Create a token
public Biscuit createToken(KeyPair root) throws Error {
return Biscuit.builder(root)
.add_authority_fact("user(\"1234\")")
.add_authority_check("check if operation(\"read\")")
.build();
}
Create an authorizer
public Tuple2<Long, WorldAuthorized> authorize(KeyPair root, byte[] serializedToken) throws NoSuchAlgorithmException, SignatureException, InvalidKeyException, Error {
return Biscuit.from_bytes(serializedToken, root.public_key()).authorizer()
.add_fact("resource(\"/folder1/file1\")")
.add_fact("operation(\"read\")")
.allow()
.authorize();
}
Attenuate a token
public Biscuit attenuate(KeyPair root, byte[] serializedToken) throws NoSuchAlgorithmException, SignatureException, InvalidKeyException, Error {
Biscuit token = Biscuit.from_bytes(serializedToken, root.public_key());
Block block = token.create_block().add_check("check if operation(\"read\")");
return token.attenuate(block);
}
Seal a token
Either<Error, byte[]> sealed_token = token.seal();
Reject revoked tokens
The revocation_identifiers
method returns the list of revocation identifiers as byte arrays.
List<RevocationIdentifier> revocation_ids = token.revocation_identifiers();
Query data from the authorizer
The queryAll
method takes a rule as argument and extracts the data from generated facts as tuples.
public Set<Fact> query(Authorizer authorizer) throws Error.Timeout, Error.TooManyFacts, Error.TooManyIterations, Error.Parser {
return authorizer.queryAll("data($name, $id) <- user($name, $id)");
}
NodeJS
The NodeJS version of Biscuit can be found on Github, and on NPM. It wraps the Biscuit Rust library in WebAssembly, and it provides both CommonJS and EcmaScript module interfaces.
β οΈ support for WebAssembly modules in NodeJS is disabled by default and needs to be explicitly enabled with a command-line flag: node --experimental-wasm-modules index.js
.
The methods that can fail (like Authorizer.authorize())
will throw an exception containing a copy of the Rust library error, deserialized from JSON.
Install
In package.json
:
{
"dependencies": {
"@biscuit-auth/biscuit-wasm": "0.4.0"
}
}
β οΈ Due to some WASM-side dependencies, NodeJS versions before v19 require the following:
import { webcrypto } from 'node:crypto';
globalThis.crypto = webcrypto;
Create a root key
const { KeyPair } = require('@biscuit-auth/biscuit-wasm');
const root = new KeyPair();
Create a token
const { biscuit, fact, PrivateKey } = require('@biscuit-auth/biscuit-wasm');
const userId = "1234";
// a token can be created from a datalog snippet
const builder = biscuit`
user(${userId});
check if resource("file1");
`;
// facts, checks and rules can be added one by one on an existing builder.
for (let right of ["read", "write"]) {
builder.addFact(fact`right(${right})`);
}
const privateKey = PrivateKey.fromString("<private key>");
const token = builder.build(privateKey);
console.log(token.toBase64());
Authorize a token
const { authorizer, Biscuit, PublicKey } = require('@biscuit-auth/biscuit-wasm');
const publicKey = PublicKey.fromString("<public key>");
const token = Biscuit.fromBase64("<base64 string>", publicKey);
const userId = "1234";
const auth = authorizer`
resource("file1");
operation("read");
allow if user(${userId}), right("read");
`;
auth.addToken(token);
// returns the index of the matched policy. Here there is only one policy,
// so the value will be `0`
const acceptedPolicy = auth.authorize();
// the authorization process is restricted to protect from DoS attacks. The restrictions can be configured
const acceptedPolicyCustomLimits = auth.authorizeWithLimits({
max_facts: 100, // default: 1000
max_iterations: 10, // default: 100
max_time_micro: 100000 // default: 1000 (1ms)
});
Attenuate a token
const { block, Biscuit, PublicKey } = require('@biscuit-auth/biscuit-wasm');
const publicKey = PublicKey.fromString("<public key>");
const token = Biscuit.fromBase64("<base64 string>", publicKey);
// restrict to read only
const attenuatedToken = token.append(block`check if operation("read")`);
console.log(attenuatedToken.toBase64());
Seal a token
A sealed token cannot be attenuated further.
const { Biscuit, PublicKey } = require('@biscuit-auth/biscuit-wasm');
const publicKey = PublicKey.fromString("<public key>");
const token = Biscuit.fromBase64("<base64 string>", publicKey);
const sealedToken = token.sealToken();
Reject revoked tokens
const { Biscuit, PublicKey } = require('@biscuit-auth/biscuit-wasm');
const publicKey = PublicKey.fromString("<public key>");
const token = Biscuit.fromBase64("<base64 string>", publicKey);
// revocationIds is a list of hex-encoded revocation identifiers,
// one per block
const revocationIds = token.getRevocationIdentifiers();
if (containsRevokedIds(revocationIds)) {
// trigger an error
}
Query data from the authorizer
const { authorizer, rule, Biscuit, PublicKey } = require('@biscuit-auth/biscuit-wasm');
const publicKey = PublicKey.fromString("<public key>");
const token = Biscuit.fromBase64("<base64 string>", publicKey);
const userId = "1234";
const auth = authorizer`
resource("file1");
operation("read");
allow if user(${userId}), right("read");
`;
auth.addToken(token);
// returns the index of the matched policy. Here there is only one policy,
// so the value will be `0`
const acceptedPolicy = auth.authorize();
const results = auth.query(rule`u($id) <- user($id)`);
console.log(results.map(fact => fact.toString()));
Using biscuit with express
Express is a popular web framework for NodeJS. biscuit-wasm
provides support for express through a dedicated middleware.
Here is a minimal example of an application exposing a single /protected/:dog
endpoint, and requiring a token with a corresponding right()
fact.
Calling middleware
with an options object provides a middleware builder, which takes either an authorizer or a function building an authorizer from a request, and returns an actual middleware. This middleware generates an authorizer from the options and the builder, runs the authorization process and either aborts the request if authorization fails or passes control over to the endpoint handler if authorization succeeds.
const express = require('express');
const { authorizer, middleware, Biscuit, PublicKey } = require('@biscuit-auth/biscuit-wasm');
const app = express();
const port = 3000;
const p = middleware({
publicKey: PublicKey.fromString("<public key>"),
fallbackAuthorizer: req => authorizer`time(${new Date()});`
});
app.get(
"/protected/:dog",
p((req) => authorizer`resource(${req.params.dog});
action("read");
allow if right(${req.params.dog}, "read");`),
(req, res) => {
// results of the authorization process are added to the `req` object
const {token, authorizer, result} = req.biscuit;
res.send("Hello!");
}
)
Middleware configuration
The middleware takes an options object. All its fields are optional except publicKey:
- publicKey: the public key used to verify token signatures;
- priorityAuthorizer: either an authorizer or a function building an authorizer from a request. Policies from the priority authorizer are matched before the endpoint policies and the fallback authorizer policies;
- fallbackAuthorizer: either an authorizer or a function building an authorizer from a request. Policies from the fallback authorizer are matched after the priority authorizer policies and the endpoint policies;
- tokenExtractor: a function extracting the token string from a request. The default extractor expects the request to carry an authorization header with the Bearer auth scheme (i.e. an Authorization: header starting with Bearer, followed by the biscuit token);
- tokenParser: a function parsing and verifying the token. By default it parses the token from a URL-safe base64 string;
- onError: an error handler. By default, it prints the error to stderr and returns an HTTP error (401 if the token is missing, 403 if it cannot be parsed, verified or authorized).
Python
Python bindings for Biscuit are distributed on PyPI. They wrap the Biscuit Rust library, and provide a pythonic API.
Detailed documentation is available at https://biscuit-python.netlify.app.
Rust
The Rust version of Biscuit can be found on Github, crates.io and on docs.rs.
Install
In Cargo.toml
:
biscuit-auth = "3.1"
Create a root key
use biscuit_auth::KeyPair;
let root_keypair = KeyPair::new();
Create a token
use biscuit_auth::{error, macros::*, Biscuit, KeyPair};
fn create_token(root: &KeyPair) -> Result<Biscuit, error::Token> {
let user_id = "1234";
// the authority block can be built from a datalog snippet
// the snippet is parsed at compile-time, efficient building
// code is generated
let mut authority = biscuit!(
r#"
// parameters can directly reference in-scope variables
user({user_id});
// parameters can be manually supplied as well
right({user_id}, "file1", {operation});
"#,
operation = "read",
);
// it is possible to modify a builder by adding a datalog snippet
biscuit_merge!(
&mut authority,
r#"check if operation("read");"#
);
authority.build(root)
}
Create an authorizer
use biscuit_auth::{builder_ext::AuthorizerExt, error, macros::*, Biscuit};
fn authorize(token: &Biscuit) -> Result<(), error::Token> {
let operation = "read";
// same as the `biscuit!` macro. There is also an `authorizer_merge!`
// macro for dynamic authorizer construction
let mut authorizer = authorizer!(
r#"operation({operation});"#
);
// register a fact containing the current time for TTL checks
authorizer.set_time();
// add an `allow if true;` policy
// meaning that we are relying entirely on checks carried in the token itself
authorizer.add_allow_all();
// link the token to the authorizer
authorizer.add_token(token)?;
let result = authorizer.authorize();
// store the authorization context
println!("{}", authorizer.to_base64_snapshot()?);
let _ = result?;
Ok(())
}
Restore an authorizer from a snapshot
use biscuit_auth::Authorizer;
fn display(snapshot: &str) {
let authorizer = Authorizer::from_base64_snapshot(snapshot).unwrap();
println!("{authorizer}");
}
Attenuate a token
use biscuit_auth::{builder_ext::BuilderExt, error, macros::*, Biscuit};
use std::time::{Duration, SystemTime};
fn attenuate(token: &Biscuit) -> Result<Biscuit, error::Token> {
let res = "file1";
// same as `biscuit!` and `authorizer!`, a `block_merge!` macro is available
let mut builder = block!(r#"check if resource({res});"#);
builder.check_expiration_date(SystemTime::now() + Duration::from_secs(60));
token.append(builder)
}
Seal a token
let sealed_token = token.seal()?;
Reject revoked tokens
The Biscuit::revocation_identifiers
method returns the list of revocation identifiers as byte arrays.
Don't forget to parse them from a textual representation (for instance
hexadecimal) if you store them as text values.
let identifiers: Vec<Vec<u8>> = token.revocation_identifiers();
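The comparison against a revocation list can then be sketched as follows. This is only an illustration: `to_hex` and `is_revoked` are hypothetical helpers, not part of the biscuit-auth API, and the hex encoding is written by hand to avoid extra dependencies.

```rust
// Hypothetical helpers (not part of biscuit-auth): reject a token when
// any of its revocation identifiers appears in a hex-encoded deny list.
fn to_hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

fn is_revoked(identifiers: &[Vec<u8>], revoked_list: &[String]) -> bool {
    // a single revoked block is enough to reject the whole token,
    // since each block carries its own identifier
    identifiers
        .iter()
        .any(|id| revoked_list.contains(&to_hex(id)))
}
```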
Query data from the authorizer
The Authorizer::query
method takes a rule as argument and extract the data from generated facts as tuples.
let res: Vec<(String, i64)> =
authorizer.query("data($name, $id) <- user($name, $id)").unwrap();
Web components
In addition to providing libraries for several languages, biscuit comes equipped with a series of web components. With these components, you can generate, inspect and attenuate tokens within a web page, or input and evaluate datalog. This can come in handy when documenting your use of biscuits.
Those components can be used directly on the tooling page of biscuitsec.org.
Installation
The web components are distributed through npm and can be bundled along with your frontend code.
β οΈ The components rely on web assembly resources that need to be served under /assets
.
Here is an example of a rollup configuration that will generate a bundle under the dist
folder.
package.json
{
"name": "wc",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"build": "rollup -c"
},
"author": "",
"license": "ISC",
"dependencies": {
"@biscuit-auth/web-components": "0.5.0"
},
"devDependencies": {
"@rollup/plugin-commonjs": "^21.0.1",
"@rollup/plugin-node-resolve": "^13.0.6",
"@web/rollup-plugin-import-meta-assets": "^1.0.7",
"rollup": "^2.60.0",
"rollup-plugin-copy": "^3.4.0"
}
}
rollup.config.js
import nodeResolve from '@rollup/plugin-node-resolve';
import commonjs from '@rollup/plugin-commonjs';
import copy from 'rollup-plugin-copy';
import { importMetaAssets } from '@web/rollup-plugin-import-meta-assets';
const sourceDir = 'src';
const outputDir = 'dist';
export default {
input: 'index.js',
output: {
dir: 'dist/',
format: 'esm'
},
plugins: [
nodeResolve({ browser: true }),
commonjs({
include: 'node_modules/**'
}),
copy({
targets: [
{ src: "node_modules/@biscuit-auth/web-components/dist/assets/*", dest: "dist/assets" }
],
}),
importMetaAssets()
]
};
index.html
…
<head>
…
<script type="module" src="/index.js"></script>
…
</head>
…
Usage
Token printer
This component allows you to interact with serialized tokens:
- inspection
- verification and authorization
- attenuation
When used without any attributes, it will provide an empty text input, where you can paste a base64-encoded token to inspect its contents.
<bc-token-printer></bc-token-printer>
The following (optional) attributes are available:
- biscuit: a base64-encoded biscuit that will be displayed as if it was pasted in the textarea;
- rootPublicKey: a hex-encoded public key used to verify the biscuit signature;
- readonly: when set to "true", will prevent changing the input values. It is meant to be used along with the biscuit attribute;
- showAuthorizer: when set to "true", will display a text input for datalog code, used to authorize the token (along with an input for a public key, to verify the token signatures);
- showAttenuation: when set to "true", will display inputs for appending blocks to the token.
Additionally, authorizer code can be provided through a child element carrying
the authorizer
class.
<bc-token-printer>
<pre><code class="authorizer">
allow if true;
</code></pre>
</bc-token-printer>
Token generator
This component allows you to generate a token from datalog code and a root private key.
When used without any attributes, it will provide an empty text input, where you can type in datalog code, and a private key input used to sign the token.
The private key input lets you paste an existing key or generate a random one. It will also display the corresponding public key.
<bc-token-generator></bc-token-generator>
The following (optional) attributes are available:
- privateKey: a hex-encoded private key used to sign the token. Only use this for examples and never put an actual private key here.
Additionally, token blocks can be provided through children elements carrying
the block
class. Attenuation blocks can carry an optional privateKey
attribute, which will be used to sign the block.
<bc-token-generator>
<pre><code class="block">
// authority block
user("1234");
</code></pre>
<pre><code class="block" privateKey="ca54b85182980232415914f508e743ee13da8024ebb12512bb517d151f4a5029">
// attenuation block
check if time($time), $time < 2023-05-04T00:00:00Z;
</code></pre>
</bc-token-generator>
Snapshot printer
This component allows you to inspect the contents of a snapshot, optionally adding extra authorization code or queries.
<bc-snapshot-printer snapshot="CgkI6AcQZBjAhD0Q72YaZAgEEgVmaWxlMSINEAMaCQoHCAQSAxiACCoQEAMaDAoKCAUSBiCo492qBjIRCg0KAggbEgcIBBIDGIAIEAA6EgoCCgASDAoKCAUSBiCo492qBjoPCgIQABIJCgcIBBIDGIAIQAA=" showAuthorizer="true" showQuery="true">
</bc-snapshot-printer>
Datalog playground
The datalog playground allows you to type in and evaluate datalog code without providing a token. It displays the evaluation results, as well as all the facts generated during evaluation.
When used without any attributes, it displays a single text input, for authorizer policies.
<bc-datalog-playground></bc-datalog-playground>
The following (optional) attributes are available:
- showBlocks: when set to "true", adds inputs for token blocks.
Additionally, authorizer code and token blocks can be provided through children
elements carrying the authorizer
or block
class. Attenuation blocks can
carry an optional privateKey
attribute, which will be used to sign the block.
<bc-datalog-playground showBlocks="true">
<pre><code class="block">
// authority block
user("1234");
</code></pre>
<pre><code class="block" privateKey="ca54b85182980232415914f508e743ee13da8024ebb12512bb517d151f4a5029">
// attenuation block
check if time($time), $time < 2023-05-04T00:00:00Z;
thirdParty(true);
</code></pre>
<pre><code class="authorizer">
// authorizer policies
time(2023-05-03T00:00:00Z);
allow if user($u);
check if thirdParty(true) trusting ed25519/1f76d2bdd5e8dc2c1dc1142d85d626b19caf8c793f4aae3ff8d0fd6bf9c038b7;
</code></pre>
</bc-datalog-playground>
Recipes
Common patterns
As a specification, biscuit does not mandate specific ways to use datalog. As far as authorization logic is concerned, there are no built-in facts with specific behaviour. That being said, some patterns are common and, while not part of the spec, are codified in libraries and tools. Finally, using specific fact names can help reduce token size.
Expiration check
The CLI and the rust library (among others) use the time()
fact to represent the instant when the token is used.
This provides a way to encode expiration dates in tokens:
check if time($time), $time <= 2022-03-30T20:00:00Z;
Expiration checks require the authorizer and tokens to use the same fact name (here, time()
). It would work with other fact names,
but the existing tooling provides helpers using time()
, so it is better to be consistent with it. Additionally, time()
is part of the default symbol table, so using it will result in smaller tokens.
Interactive example
check if time($time), $time <= 2022-03-30T20:00:00Z;
// the authorizer can provide a fact containing the current time
time(2022-03-30T19:00:00Z);
allow if true;
Attenuation can add more expiration checks, and all of them will be tested.
Interactive example
check if time($time), $time <= 2022-03-30T20:00:00Z;
check if time($time), $time <= 2022-03-30T18:30:00Z;
// the authorizer can provide a fact containing the current time
time(2022-03-30T19:00:00Z);
allow if true;
Capabilities
The right()
fact is commonly used to describe access rights. Depending on the context, it can be used with several values:
right("read"); // read-only access to everything for the token holder
right("resource1", "read"); // read-only access to resource1 for the token holder
right("user1", "resource1", "read"); // read-only access to resource1 for user1
Usually, a right()
fact carried in a token will not mention a user id and will refer to the token holder. right()
facts
defined server-side (such as in an access rights matrix) will mention an identifier. Tokens carrying a user identifier
usually do so with the user()
fact.
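To make the split concrete, here is a sketch (the fact values are made up): the token only names its holder and a capability, while the server-side rights matrix uses full triples.

```
// carried in the token's authority block
user("1234");
right("read");

// defined server-side, in the authorizer (access rights matrix)
right("1234", "file1", "read");
```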
Default symbols
In order to reduce the size of tokens, the biscuit specification defines a list of strings that can be used in tokens without having to be serialized. Using these common symbols costs very little and won't increase the size of the token. It is thus good practice to use those strings as fact names or terms, as long as they make sense.
- read
- write
- resource
- operation
- right
- time
- role
- owner
- tenant
- namespace
- user
- team
- service
- admin
- group
- member
- ip_address
- client
- client_ip
- domain
- path
- version
- cluster
- node
- hostname
- nonce
- query
Interoperability & Reusability
In small biscuit deployments (a couple of services, in a single organization), you have full control on which rules and facts are defined and have meaning. On bigger deployments (across multiple organizations, or if you want to design a reusable library that can be used by multiple services), you will need to be more careful about avoiding name collisions.
While no well-defined patterns have emerged yet, a good practice is to prefix fact names with the organization name, separated by a colon (:). For instance:
// can collide with other facts
user("1234");
// makes it clear that the user is tied to a specific organization
wayne_industries:user("1234");
A few notes
Using namespaced fact names will make tokens a bit bigger, for two reasons:
- well, they're longer;
- names like user that are part of the default symbol table are represented by just an index in the wire format, while namespaced names have to be serialized in full.
The size increase is mitigated by string interning (you only pay the extra cost once).
Another thing to note is that namespacing is not a security feature. It prevents accidental name collisions, but it is not a proper way to separate facts based on their origin; third party blocks provide such a mechanism. Namespacing can be used in conjunction with them, to make things easier to read and understand.
Role based access control
Role-based access control is a common authorization model where a set of permissions is assigned to a role, and a user or program can have one or more roles. This makes permissions more manageable than giving them to users directly: a role can be designed for a set of tasks, and can be given or taken back from the user depending on their duties, in one operation, instead of reviewing the user's entire set of rights. Changing the permissions of a role can also be done without going through all the users.
Example
Let us imagine a space-faring package delivery company. Each member of the company has specific duties, represented by roles that allow them to perform specific actions.
// let's define roles and associated permissions for a package delivery company
role("admin", ["billing:read", "billing:write", "address:read", "address:write"] );
// accountants can check the billing info and the address for invoicing
role("accounting", ["billing:read", "billing:write", "address:read"]);
// support people can update delivery info
role("support", ["address:read", "address:write"]);
// the pilot can drive and learn the delivery address
role("pilot", ["spaceship:drive", "address:read"]);
// delivery people can learn the address and handle the package
role("delivery", ["address:read", "package:load", "package:unload", "package:deliver"]);
// associate users to roles
// this would represent a database table holding both user data and roles,
// but similar facts could be derived from a join table between User and Role tables
user_roles(0, "Professor Farnsworth", ["admin"]);
user_roles(1, "Hermes Conrad", ["accounting"]);
user_roles(2, "Amy Wong", ["support"]);
user_roles(3, "Leela", ["pilot", "delivery"]);
user_roles(4, "Fry", ["delivery"]);
We want to check if an operation is authorized, depending on the user requesting it. Typically, the user id would be carried in a fact like user(0), in the first block of a Biscuit token. Each employee gets issued their own token.
From that user id, we would look up in the database the user's roles, and for each role the authorized operations, and load that as facts. We can then check that we have the rights to perform the operation:
role("admin", ["billing:read", "billing:write", "address:read", "address:write"] );
role("accounting", ["billing:read", "billing:write", "address:read"]);
role("support", ["address:read", "address:write"]);
role("pilot", ["spaceship:drive", "address:read"]);
role("delivery", ["address:read", "package:load", "package:unload", "package:deliver"]);
user_roles(0, "Professor Farnsworth", ["admin"]);
user_roles(1, "Hermes Conrad", ["accounting"]);
user_roles(2, "Amy Wong", ["support"]);
user_roles(3, "Leela", ["pilot", "delivery"]);
user_roles(4, "Fry", ["delivery"]);
// we got this from a cookie or Authorization header
user(1);
// we know from the request which kind of operation we want
operation("billing:write");
// we materialize the rights
right($id, $principal, $operation) <-
user($id),
operation($operation),
user_roles($id, $principal, $roles),
role($role, $permissions),
$roles.contains($role),
$permissions.contains($operation);
allow if
operation($op),
right($id, $principal, $op);
deny if true;
Why are we loading data from the database and checking the rights here, when we could do all of that as part of a SQL query? After all, Datalog does similar work, joining facts like we would join tables.
We actually need to use both: a SQL query to load only the data we need, because requesting all the users and roles on every request would quickly overload the database. And we load them in Datalog because we can encode more complex rules with multiple nested joins and more specific patterns. Example: we could get an attenuated token that only delegates rights from a particular role of the original user.
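As a sketch of that last case, an attenuation block could carry a check restricting the token to the permissions of a single role (the role name here is illustrative):

// added in an attenuation block: only allow operations
// granted by the "delivery" role
check if
  operation($op),
  role("delivery", $permissions),
  $permissions.contains($op);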
Another question: why are we creating the right() facts, instead of using the body of that rule directly in the allow policy?
Verifying inside the policy would work, but we would miss another benefit of Datalog: we can use it to explore data. Try adding more user() facts and see which rights are generated. Try adding rules to answer specific questions.
Example: write a rule to get the list of employees that are authorized to deliver a package.
Answer
can_deliver($name) <-
role($role, $permissions),
$permissions.contains("package:deliver"),
user_roles($id, $name, $roles),
$roles.contains($role);
Resource specific roles
We only addressed authorization per operation so far, but roles are often also linked to a resource, like an organization in a SaaS application, or a team or project in project management software. Users can get different roles depending on the resource they access, and they can also get global roles.
We have high priority packages that need special handling, so not everybody can deliver them. We will create different roles for normal and high priority packages. There are multiple ways this can be done, depending on your API and data model. You could have a generic role or role assignment with a "resource type" field, like this:
user_roles(3, "Leela", "high priority", ["pilot", "delivery"]);
user_roles(3, "Leela", "low priority", ["pilot", "delivery"]);
user_roles(4, "Fry", "low priority", ["delivery"]);
Or we could have roles defined per resource, and users are assigned those roles:
role("low priority", "pilot", ["spaceship:drive", "address:read"]);
role("high priority", "pilot", ["spaceship:drive", "address:read"]);
user_roles(3, "Leela", "low priority", ["pilot", "delivery"]);
user_roles(3, "Leela", "high priority", ["pilot", "delivery"]);
Or even different types of roles:
// using a numeric id as foreign key in users
role_high_priority("pilot", ["spaceship:drive", "address:read"]);
role_low_priority("pilot", ["spaceship:drive", "address:read"]);
// we need user_role or something else
user_high_priority(3, "Leela", ["pilot", "delivery"]);
user_low_priority(3, "Leela", ["pilot", "delivery"]);
Let's use the second version, and see how data is fetched from the database:
// we got this from a cookie or Authorization header
user(3);
// we know from the request which kind of operation we want
operation("address:read");
// we know from the request we want to read the address of a high priority package
resource("high priority");
// user roles loaded from the database with the user id and resource
user_roles(3, "Leela", "high priority", ["pilot", "delivery"]);
// roles loaded from the resource and the list from user_roles
role("high priority", "pilot", ["spaceship:drive", "address:read"]);
role("high priority", "delivery", ["address:read", "package:load", "package:unload", "package:deliver"]);
// we materialize the rights
right($id, $principal, $operation, $priority) <-
user($id),
operation($operation),
resource($priority),
user_roles($id, $principal, $priority, $roles),
role($priority, $role, $permissions),
$roles.contains($role),
$permissions.contains($operation);
Here is the full example:
role("low priority", "admin", ["billing:read", "billing:write", "address:read", "address:write"] );
role("low priority","accounting", ["billing:read", "billing:write", "address:read"]);
role("low priority","support", ["address:read", "address:write"]);
role("low priority", "pilot", ["spaceship:drive", "address:read"]);
role("low priority", "delivery", ["address:read", "package:load", "package:unload", "package:deliver"]);
role("high priority", "admin", ["billing:read", "billing:write", "address:read", "address:write"] );
role("high priority", "pilot", ["spaceship:drive", "address:read"]);
role("high priority", "delivery", ["address:read", "package:load", "package:unload", "package:deliver"]);
user_roles(0, "Professor Farnsworth", "low priority", ["admin"]);
user_roles(1, "Hermes Conrad", "low priority", ["accounting"]);
user_roles(2, "Amy Wong", "low priority", ["support"]);
user_roles(3, "Leela", "low priority", ["pilot", "delivery"]);
user_roles(4, "Fry", "low priority", ["delivery"]);
user_roles(0, "Professor Farnsworth", "high priority", ["admin"]);
user_roles(3, "Leela", "high priority", ["pilot", "delivery"]);
// we got this from a cookie or Authorization header
user(3);
// we know from the request which kind of operation we want
operation("address:read");
// we know from the request we want to read the address of a high priority package
resource("high priority");
// we materialize the rights
right($id, $principal, $operation, $priority) <-
user($id),
operation($operation),
resource($priority),
user_roles($id, $principal, $priority, $roles),
role($priority, $role, $permissions),
$roles.contains($role),
$permissions.contains($operation);
allow if
operation($op),
resource($priority),
right($id, $principal, $op, $priority);
deny if true;
Attenuation
Roles work great when the user structure is well defined and does not change much, but they grow in complexity as we support more use cases: temporary access, transversal roles, interns, contractors, audits...
Attenuation in Biscuit provides a good escape hatch to avoid that complexity. As an example, let's assume that, for pressing reasons, Leela has to let Bender deliver the package (usually, we do not trust Bender). Do we add a new role just for him? Does Leela need to contact headquarters to create it and issue a new token for Bender, in the middle of travel?
Leela can instead take her own token and attenuate it to allow the delivery of high priority packages for a limited time. She can even seal the token to prevent further attenuation. We would end up with the following:
// we got this from the first block of the token
user(3);
// the token is attenuated with a new block containing those checks
check if
resource("high priority"),
operation($op),
role("high priority", "delivery", $permissions),
$permissions.contains($op);
check if
time($date),
$date < 3000-01-31T12:00:00.00Z;
// data from the request
operation("address:read");
resource("high priority");
// provided by the authorizer
time(3000-01-31T11:00:00.00Z);
// user roles loaded from the user id in the first block
user_roles(3, "Leela", "high priority", ["pilot", "delivery"]);
// roles loaded from the resource and the list from user_roles
role("high priority", "pilot", ["spaceship:drive", "address:read"]);
role("high priority", "delivery", ["address:read", "package:load", "package:unload", "package:deliver"]);
// we materialize the rights
right($id, $principal, $operation, $priority) <-
user($id),
operation($operation),
resource($priority),
user_roles($id, $principal, $priority, $roles),
role($priority, $role, $permissions),
$roles.contains($role),
$permissions.contains($operation);
allow if
operation($op),
resource($priority),
right($id, $principal, $op, $priority);
deny if true;
Attenuating a token does not increase rights: if Leela suddenly loses the delivery role, the check in the attenuated token could still succeed, but authorization would fail for both Leela and Bender because the right fact would not be generated.
Per request attenuation
In an API with authorization, the client would typically hold a long-lived token with broad rights. But when executing a single request, we can attenuate the token so that it is only usable for that specific request. Then, if the request's token gets stolen, the impact is limited.
Let's use a basic token containing a user id, giving access to everything owned by that user, and perform a GET HTTP request on "/articles/1":
user(1234);
// the authorizer provides the current date, the resource being accessed and the operation being performed
time(2022-03-30T19:00:00Z);
resource("/articles/1");
operation("read");
// the authorizer provides a series of rights for the given user
right(1234, "/articles/1", "read");
right(1234, "/articles/1", "write");
right(1234, "/articles/2", "read");
right(1234, "/articles/2", "write");
// the request is allowed if the user has sufficient rights for the current operation
allow if user($user), right($user, "/articles/1", "write");
Instead we can make a token that would only be valid for that request, with a short expiration date:
user(1234);
check if time($date), $date <= 2022-03-30T19:00:10Z;
check if operation("read");
check if resource("/articles/1");
// the authorizer provides the current date, the resource being accessed and the operation being performed
time(2022-03-30T19:00:00Z);
resource("/articles/1");
operation("read");
// the authorizer provides a series of rights for the given user
right(1234, "/articles/1", "read");
right(1234, "/articles/1", "write");
right(1234, "/articles/2", "read");
right(1234, "/articles/2", "write");
// the request is allowed if the user has sufficient rights for the current operation
allow if user($user), right($user, "/articles/1", "write");
So if we tried to use it on another endpoint, it would fail:
user(1234);
check if time($date), $date <= 2022-03-30T19:00:10Z;
check if operation("read");
check if resource("/articles/1");
// the authorizer provides the current date, the resource being accessed and the operation being performed
time(2022-03-30T19:00:00Z);
resource("/articles/1/comments");
operation("write");
// the authorizer provides a series of rights for the given user
right(1234, "/articles/1", "read");
right(1234, "/articles/1", "write");
right(1234, "/articles/2", "read");
right(1234, "/articles/2", "write");
// the request is allowed if the user has sufficient rights for the current operation
allow if user($user), right($user, "/articles/1/comments", "write");
This method relies on the authorizer providing the facts to match on the request. It can be extended further by providing more data, like a list of HTTP headers or a cryptographic hash of the body.
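For instance, an attenuation block could pin hypothetical facts built by the authorizer from the request (the fact names header() and body_hash() are illustrative, not part of the specification):

// only valid for a request carrying this header value
check if header("x-request-id", "abc123");
// only valid for a request body with this (placeholder) hash
check if body_hash(hex:0102aabb);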
Authorization performance
Authorization is likely part of your request handling hot path. As such, it is natural to try and make it as fast as possible.
The first rule of performance optimization
Don't do it
Benchmarks done with biscuit-rust show that the whole process (parsing, signatures verification and datalog generation and evaluation) usually clocks in at around one millisecond. In a lot of cases, that will not be a bottleneck and thus not where you should work on performance optimization. That being said, in some cases you will need to optimize the authorization process, so this page is there to help you do so.
Authorization process breakdown
The authorization process can be broken down into four parts:
- parsing;
- signature verification;
- datalog generation;
- datalog evaluation.
Parsing is typically one of the fastest steps; it depends only on the token size. Signature verification is where most of the time is spent; it depends mostly on the number of blocks (and their size). Datalog generation and datalog evaluation happen in tandem. That's the part where you have the most leverage. Datalog generation purely depends on how your application is designed. In many cases it can be done statically and thus have a negligible contribution to the overall runtime. Datalog evaluation depends on the actual datalog code that is evaluated.
Measure
When it comes to performance optimization, the first step is always to measure the execution time of each step. First to determine if optimization is even needed, then to quantify progress. This part entirely depends on your tech stack. You can start with coarse-grained traces telling you how long the whole authorization process takes, and then only dig down if optimization is needed.
Datalog generation is not likely to be the bottleneck in simple cases, with static authorization rules. However, if your datalog generation involves database queries and complex generation logic, then you have optimization opportunities. Large or complex datalog rule sets can take time to evaluate, making datalog evaluation a good target for optimization. There might be a balance between datalog generation and evaluation (i.e. making the datalog generation process more complex in order to simplify evaluation), so optimizations should always be considered over the whole authorization process.
Datalog performance contributors
As stated above, there are a lot of external factors that contribute to the final time and resource costs of the authorization process.
Other things being equal, some elements in datalog code tend to have a disproportionate effect on performance. This section lists the most common ones, in order to help you find the source of slowdowns.
With biscuit-rust, you can see how much time was spent evaluating datalog in an authorizer with Authorizer.execution_time(). This does not replace performance measurements, but it gives you a simple way to compare datalog snippets. Authorizer snapshots carry this information and can be inspected with biscuit-cli through biscuit inspect-snapshot, or with the web inspector.
Number of rules
The number of rules is a direct contributor to evaluation performance. The datalog engine tries to match every rule with every fact to produce new facts, and then tries again with new facts until no new facts are produced.
- authorization contexts with a lot of rules will take more time to compute
- rules generating facts matched by other rules will require more iterations before convergence
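For example, in this minimal chain, each rule consumes a fact produced by the previous one, so the engine needs several iterations before reaching a fixed point:

a(1);
// b facts are derived from a facts, then c facts from b facts:
// each extra level of derivation adds an iteration
b($x) <- a($x);
c($x) <- b($x);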
Expression evaluation
This part is implementation-dependent; the advice below applies primarily to the Rust implementation.
Rules can contain expressions, which are evaluated after facts are matched. The biscuit specification describes an evaluation strategy based on a stack machine, which aims to provide fast evaluation.
Expensive operations
Operations on booleans, integers and dates are really simple operations and thus quite fast. Same for string equality, thanks to string interning (comparing two strings for equality is turned into an equality test on integers). Other string operations like prefix / suffix / substring tests are a bit more costly. Regex matching tends to be the worst offender, especially when there are a lot of different regexes. Regex compilation is memoized, so the cost can be amortized when one regex is used to match against several strings. However, if several regexes are matched against a single string, then the regex compilation costs will not be amortized.
Splitting expressions
Expressions are tried in order: if an expression evaluates to false (or fails to evaluate), the remaining expressions are not evaluated. Splitting out simple conditions and placing them first allows rules to fail fast, only evaluating complex operations when needed.
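As a sketch, in the following check the cheap string equality is placed before the regex test, so the regex is only evaluated for resources belonging to the right tenant (the fact names are illustrative):

check if
  tenant($t),
  resource($path),
  $t == "acme",                        // cheap: interned string equality
  $path.matches("^/files/[0-9]+$");    // expensive: regex evaluation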
Reference
Biscuit, a bearer token with offline attenuation and decentralized verification
Introduction
Biscuit is a bearer token that supports offline attenuation, can be verified by any system that knows the root public key, and provides a flexible caveat language based on logic programming. It is serialized as Protocol Buffers [1], and designed to be small enough for storage in HTTP cookies.
Vocabulary
- Datalog: a declarative logic language that works on facts defining data relationships, rules creating more facts if conditions are met, and queries to test such conditions
- check: a restriction on the kind of operation that can be performed with the token that contains it, represented as a datalog query in biscuit. For the operation to be valid, all of the checks defined in the token and the authorizer must succeed
- allow/deny policies: a list of datalog queries that are tested in a sequence until one of them matches. They can only be defined in the authorizer
- block: a list of datalog facts, rules and checks. The first block is the authority block, used to define the basic rights of a token
- (Verified) Biscuit: a completely parsed biscuit, whose signatures and final proof have been successfully verified
- Unverified Biscuit: a completely parsed biscuit, whose signatures and final proof have not been verified yet. Manipulating unverified biscuits can be useful for generic tooling (eg inspecting a biscuit without knowing its public key)
- Authorized Biscuit: a completely parsed biscuit, whose signatures and final proof have been successfully verified and that was authorized in a given context, by running checks and policies. An authorized biscuit may carry information about the successful authorization, such as the allow query that matched and the facts generated in the process
- Authorizer: the context in which a biscuit is evaluated. An authorizer may carry facts, rules, checks and policies
Overview
A Biscuit token is defined as a series of blocks. The first one, named "authority block", contains rights given to the token holder. The following blocks contain checks that reduce the token's scope, in the form of logic queries that must succeed. The holder of a biscuit token can at any time create a new token by adding a block with more checks, thus restricting the rights of the new token, but they cannot remove existing blocks without invalidating the signature.
The token is protected by public key cryptography operations: the initial creator of a token holds a secret key, and any authorizer for the token needs only to know the corresponding public key. Any attenuation operation will employ ephemeral key pairs that are meant to be destroyed as soon as they are used.
There is also a sealed version of that token that prevents further attenuation.
The logic language used to design rights, checks, and operation data is a variant of datalog that accepts expressions on some data types.
Semantics
A biscuit is structured as an append-only list of blocks, each containing checks and describing authorization properties. As with Macaroons [2], an operation must comply with all checks in order to be allowed by the biscuit.
Checks are written as queries defined in a flavor of Datalog that supports expressions on some data types [3], without support for negation. This simplifies its implementation and makes checks more precise.
Logic language
Terminology
A Biscuit Datalog program contains facts and rules, which are made of predicates over the following types:
- variable
- integer
- string
- byte array
- date
- boolean
- set: a deduplicated list of values of any type, except variable or set
While a Biscuit token does not use a textual representation for storage, we use one for parsing and pretty printing of Datalog elements.
A predicate has the form Predicate(v0, v1, ..., vn).
A fact is a predicate that does not contain any variable.
A rule has the form:
Pr(r0, r1, ..., rk) <- P0(t0_1, t0_2, ..., t0_m1), ..., Pn(tn_1, tn_2, ..., tn_mn), E0(v0, ..., vi), ..., Ex(vx, ..., vy)
The part on the left of the arrow is called the head, and the part on the right the body. In a rule, each of the ri or ti_j terms can be of any type. A rule is safe if all of the variables in the head appear somewhere in the body.
We also define expressions Ex over the variables v0 to vi. Expressions define tests on variable values when applying the rule. If an expression returns false, the rule application fails.
A query is a type of rule that has no head. It has the following form:
?- P0(t1_1, t1_2, ..., t1_m1), ..., Pn(tn_1, tn_2, ..., tn_mn), C0(v0), ..., Cx(vx)
When applying a rule, if there is a combination of facts that matches the
body's predicates, we generate a new fact corresponding to the head (with the
variables bound to the corresponding values).
A check is a list of queries for which the token validation will fail if it cannot produce any fact. A single query needs to match for the check to succeed. If any of the checks fails, the entire verification fails.
An allow policy or deny policy is a list of queries. If any of the queries produces something, the policy matches, and we stop there; otherwise we test the next one. If an allow policy succeeds, the token verification succeeds, while if a deny policy succeeds, the token verification fails. Those policies are tested after all of the checks have passed.
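For instance, given the fact user(0), the following policies are tested in order: the deny policy matches first, so verification fails and the allow policy below it is never reached:

// policies are evaluated in declaration order
deny if user(0);
allow if user($id);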
We will represent the various types as follows:
- variable: $variable (the variable name is converted to an integer id through the symbol table)
- integer: 12
- string: "hello" (strings are converted to integer ids through the symbol table)
- byte array: hex:01A2
- date in RFC 3339 format: 1985-04-12T23:20:50.52Z
- boolean: true or false
- set: ["a", "b", "c"]
As an example, assume we have the following facts: parent("a", "b"), parent("b", "c"), parent("c", "d"). If we apply the rule grandparent($x, $z) <- parent($x, $y), parent($y, $z), we will try to replace the predicates in the body by matching facts. We will get the following combinations:
grandparent("a", "c") <- parent("a", "b"), parent("b", "c")
grandparent("b", "d") <- parent("b", "c"), parent("c", "d")
The system will now contain the two new facts grandparent("a", "c") and grandparent("b", "d"). Whenever we generate new facts, we have to reapply all of the system's rules to the facts, because some rules might give new results. Once rule application does not generate any new facts, we can stop.
Data types
An integer is a signed 64-bit integer. It supports the following operations: lower than, greater than, lower than or equal, greater than or equal, equal, not equal, set inclusion, addition, subtraction, multiplication, division, bitwise and, bitwise or, bitwise xor.
A string is a sequence of UTF-8 characters. It supports the following operations: prefix, suffix, equal, not equal, set inclusion, regular expression matching, concatenation (with +), substring test (with .contains()).
A byte array is a sequence of bytes. It supports the following operations: equal, not equal, set inclusion.
A date is a 64-bit unsigned integer representing a TAI64. It supports the following operations: < , <= (before), >, >= (after), equal, not equal, set inclusion.
A boolean is true or false. It supports the following operations: ==, !=, ||, &&, set inclusion.
A set is a deduplicated list of terms of the same type. It cannot contain variables or other sets. It supports equal, not equal, intersection, union, set inclusion.
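A few checks illustrating these operations (the fact names are illustrative):

check if quota($q), $q - 1 >= 0;                              // integer arithmetic and comparison
check if resource($r), $r.starts_with("/articles/");          // string prefix test
check if time($t), $t <= 2030-01-01T00:00:00Z;                // date comparison
check if roles($r), $r.intersection(["admin"]) == ["admin"];  // set intersection and equality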
Grammar
The logic language is described by the following EBNF grammar:
<origin_clause> ::= <sp>? "trusting " <origin_element> <sp>? ("," <sp>? <origin_element> <sp>?)*
<origin_element> ::= "authority" | "previous" | <signature_alg> "/" <bytes>
<signature_alg> ::= "ed25519"
<block> ::= (<origin_clause> ";" <sp>?)? (<block_element> | <comment> )*
<block_element> ::= <sp>? ( <check> | <fact> | <rule> ) <sp>? ";" <sp>?
<authorizer> ::= (<authorizer_element> | <comment> )*
<authorizer_element> ::= <sp>? ( <policy> | <check> | <fact> | <rule> ) <sp>? ";" <sp>?
<comment> ::= "//" ([a-z] | [A-Z] ) ([a-z] | [A-Z] | [0-9] | "_" | ":" | " " | "\t" | "(" | ")" | "$" | "[" | "]" )* "\n"
<fact> ::= <name> "(" <sp>? <fact_term> (<sp>? "," <sp>? <fact_term> )* <sp>? ")"
<rule> ::= <predicate> <sp>? "<-" <sp>? <rule_body>
<check> ::= "check" <sp> ( "if" | "all" ) <sp> <rule_body> (<sp>? " or " <sp>? <rule_body>)* <sp>?
<policy> ::= ("allow" | "deny") <sp> "if" <sp> <rule_body> (<sp>? " or " <sp>? <rule_body>)* <sp>?
<rule_body> ::= <rule_body_element> <sp>? ("," <sp>? <rule_body_element> <sp>?)* (<sp> <origin_clause>)?
<rule_body_element> ::= <predicate> | <expression>
<predicate> ::= <name> "(" <sp>? <term> (<sp>? "," <sp>? <term> )* <sp>? ")"
<term> ::= <fact_term> | <variable>
<fact_term> ::= <boolean> | <string> | <number> | ("hex:" <bytes>) | <date> | <set>
<set_term> ::= <boolean> | <string> | <number> | <bytes> | <date>
<number> ::= "-"? [0-9]+
<bytes> ::= ([a-z] | [0-9])+
<boolean> ::= "true" | "false"
<date> ::= [0-9]* "-" [0-9] [0-9] "-" [0-9] [0-9] "T" [0-9] [0-9] ":" [0-9] [0-9] ":" [0-9] [0-9] ( "Z" | ( ("+" | "-") [0-9] [0-9] ":" [0-9] [0-9] ))
<set> ::= "[" <sp>? ( <set_term> ( <sp>? "," <sp>? <set_term>)* <sp>? )? "]"
<expression> ::= <expression_element> (<sp>? <operator> <sp>? <expression_element>)*
<expression_element> ::= <expression_unary> | (<expression_term> <expression_method>? )
<expression_unary> ::= "!" <sp>? <expression>
<expression_method> ::= "." <method_name> "(" <sp>? (<term> ( <sp>? "," <sp>? <term>)* )? <sp>? ")"
<method_name> ::= ([a-z] | [A-Z] ) ([a-z] | [A-Z] | [0-9] | "_" )*
<expression_term> ::= <term> | ("(" <sp>? <expression> <sp>? ")")
<operator> ::= "<" | ">" | "<=" | ">=" | "==" | "!=" | "&&" | "||" | "+" | "-" | "*" | "/" | "&" | "|" | "^"
<sp> ::= (" " | "\t" | "\n")+
The name, variable and string rules are defined as:
name:
- first character is any UTF-8 letter character
- following characters are any UTF-8 letter character, numbers, _ or :
variable:
- first character is $
- following characters are any UTF-8 letter character, numbers, _ or :
string:
- first character is "
- content is any printable UTF-8 character except ", which must be escaped as \"
- last character is "
The order of operations in expressions is the following:
- parentheses;
- methods;
- * / (left associative)
- + - (left associative)
- & (left associative)
- | (left associative)
- ^ (left associative)
- <= >= < > == (not associative: they have to be combined with parentheses)
- && (left associative)
- || (left associative)
Scopes
Since the first block defines the token's rights through facts and rules, and later blocks can define their own facts and rules, we must ensure the token cannot increase its rights with later blocks.
This is done through execution scopes: by default, a block's rules and checks can only apply to facts created in the authority block, in the current block, or in the authorizer. Rules, checks and policies defined in the authorizer can only apply to facts created in the authority block or in the authorizer.
Example:
- the token contains right("file1", "read") in the first block
- the token holder adds a block with the fact right("file2", "read")
- the authorizer adds:
  resource("file2")
  operation("read")
  check if resource($res), operation($op), right($res, $op)
The authorizer's check will fail because, when it is evaluated, it only sees right("file1", "read") from the authority block.
Scope annotations
Rules (and blocks) can specify trusted origins through a special trusting annotation. By default, only the current block, the authority block and the authorizer are trusted. This default can be overridden:
- at the block level
- at the rule level (which takes precedence over block-level annotations)
The scope annotation can be a combination of:
- authority (default behaviour): the authorizer, the current block and the authority block are trusted;
- previous (only available in blocks): the authorizer, the current block and the previous blocks (including the authority) are trusted;
- a public key: the authorizer, the current block and the blocks carrying an external signature verified by the provided public key are trusted.
previous is only available in blocks, and is ignored when used in the authorizer.
When there are multiple scope annotations, the trusted origins are added. Note that the current block and the authorizer are always trusted.
This scope annotation is then turned into a set of block ids before evaluation. Authorizer facts and rules are assigned a dedicated block id that's distinct from the authority and from the extra blocks.
Only facts whose origin is a subset of these trusted origins are matched. The authorizer block id and the current block id are always part of these trusted origins.
Checks
Checks are logic queries evaluating conditions on facts. To validate an operation, all of a token's checks must succeed.
One block can contain one or more checks.
Their text representation is `check if` or `check all`, followed by the body of the query.

A check can contain multiple queries, separated by the `or` keyword; it succeeds if any of them succeeds.

- a `check if` query succeeds if it finds at least one set of facts matching the body and the expressions
- a `check all` query succeeds if all the sets of facts matching the body also satisfy the expressions; `check all` can only be used starting from block version 4
Here are some examples of writing checks:
Basic token
This first token defines a list of authority facts giving `read` and `write` rights on `file1`, and `read` on `file2`. The first check ensures that the `operation` is `read` (and will not allow any other `operation` fact), and that we have the `read` right over the resource. The second check requires that the resource is `file1`.
right("file1", "read");
right("file2", "read");
right("file1", "write");
check if
resource($0),
operation("read"),
right($0, "read"); // restrict to read operations
check if
resource("file1"); // restrict to file1 resource
resource("file1");
operation("read");
The authorizer side provides the `resource` and `operation` facts with information from the request.

Here the authorizer provides the facts `resource("file1")` and `operation("read")`, so both checks succeed.

If the authorizer provided the facts `resource("file2")` and `operation("read")`, the rule application of the first check would see `resource("file2"), operation("read"), right("file2", "read")` with `$0 = "file2"`, so it would succeed, but the second check would fail because it expects `resource("file1")`.
Broad authority rules
In this example, we have a token with very broad rights that will be attenuated before being given to a user. The authority block can define rules that will generate facts depending on data provided by the authorizer. This helps reduce the size of the token.
// if there is an ambient resource and we own it, we can read it
right($0, "read") <- resource($0), owner($1, $0);
// if there is an ambient resource and we own it, we can write to it
right($0, "write") <- resource($0), owner($1, $0);
check if
right($0, $1),
resource($0),
operation($1);
check if
resource($0),
owner("alice", $0); // defines a token only usable by alice
resource("file1");
operation("read");
owner("alice", "file1");
These rules will define authority facts depending on authorizer data.

Here, we have the facts `resource("file1")` and `owner("alice", "file1")`; the authority rules then define `right("file1", "read")` and `right("file1", "write")`, which allow the first and second checks to succeed.

If the `owner` ambient fact does not match the restriction in the second check, the token verification will fail.
Allow/deny policies
Allow and deny policies are queries that are tried one by one, after all of the checks have been evaluated. If one of them matches, we stop there; otherwise we try the next one. If an allow policy matches, token verification succeeds, while if a deny policy matches, token verification fails. If no policy matches, verification fails.
They are written as `allow if` or `deny if`, followed by the body of the query.

As with checks, the body of a policy can contain multiple queries, separated by `or`. A single query needs to match for the policy to match.
Expressions
We can define queries or rules with expressions on some predicate values, and restrict usage based on ambient values:
right("/folder/file1", "read");
right("/folder/file2", "read");
right("/folder2/file3", "read");
check if resource($0), right($0, $1);
check if time($0), $0 < 2019-02-05T23:00:00Z; // expiration date
check if source_ip($0), ["1.2.3.4", "5.6.7.8"].contains($0); // set membership
check if resource($0), $0.starts_with("/folder/"); // prefix operation on strings
resource("/folder/file1");
time(2019-02-01T23:00:00Z);
source_ip("1.2.3.4");
Executing an expression must always return a boolean, and all variables appearing in an expression must also appear in other predicates of the rule.
Execution
Expressions are internally represented as a series of opcodes for a stack based virtual machine. There are three kinds of opcodes:
- value: a raw value of any type. If it is a variable, the variable must also appear in a predicate, so the variable gets a real value for execution. When encountering a value opcode, we push it onto the stack
- unary operation: an operation that applies on one argument. When executed, it pops a value from the stack, applies the operation, then pushes the result
- binary operation: an operation that applies on two arguments. When executed, it pops two values from the stack, applies the operation, then pushes the result
After executing, the stack must contain only one value, of the boolean type.
Here are the currently defined unary operations:
- negate: boolean negation
- parens: returns its argument without modification (this is used when printing the expression, to avoid precedence errors)
- length: defined on strings, byte arrays and sets
Here are the currently defined binary operations:
- less than, defined on integers and dates, returns a boolean
- greater than, defined on integers and dates, returns a boolean
- less or equal, defined on integers and dates, returns a boolean
- greater or equal, defined on integers and dates, returns a boolean
- equal, defined on integers, strings, byte arrays, dates, set, returns a boolean
- not equal, defined on integers, strings, byte arrays, dates, set, returns a boolean (v4 only)
- contains takes a set and another value as arguments, returns a boolean. Between two sets, indicates if the first set is a superset of the second one. Between two strings, indicates a substring test.
- prefix, defined on strings, returns a boolean
- suffix, defined on strings, returns a boolean
- regex, defined on strings, returns a boolean
- add, defined on integers, returns an integer. Defined on strings, concatenates them.
- sub, defined on integers, returns an integer
- mul, defined on integers, returns an integer
- div, defined on integers, returns an integer
- and, defined on booleans, returns a boolean
- or, defined on booleans, returns a boolean
- intersection, defined on sets, return a set that is the intersection of both arguments
- union, defined on sets, return a set that is the union of both arguments
- bitwiseAnd, defined on integers, returns an integer (v4 only)
- bitwiseOr, defined on integers, returns an integer (v4 only)
- bitwiseXor, defined on integers, returns an integer (v4 only)
Integer operations must have overflow checks: if an operation overflows, the expression fails.
Example
The expression `1 + 2 < 4` will translate to the following opcodes: `1, 2, +, 4, <`.

Here is how it would be executed:

| Op | stack    |
|----|----------|
|    | [ ]      |
| 1  | [ 1 ]    |
| 2  | [ 2, 1 ] |
| +  | [ 3 ]    |
| 4  | [ 4, 3 ] |
| <  | [ true ] |

The stack contains only one value, and it is `true`: the expression succeeds.
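The evaluation loop described above can be sketched in Python (a toy sketch: opcodes are modeled as a plain list where binary operators are strings, a hypothetical representation rather than the wire format, and unary operations are omitted):

```python
import operator

def evaluate(opcodes):
    """Evaluate a list of opcodes on a stack machine.

    Value opcodes are pushed onto the stack; a binary operator pops
    two values, applies the operation and pushes the result. The
    final stack must hold exactly one boolean.
    """
    binary_ops = {"+": operator.add, "-": operator.sub,
                  "*": operator.mul, "<": operator.lt,
                  ">": operator.gt, "==": operator.eq}
    stack = []
    for op in opcodes:
        if op in binary_ops:
            right = stack.pop()  # first pop is the second operand
            left = stack.pop()
            stack.append(binary_ops[op](left, right))
        else:
            stack.append(op)  # value opcode: push onto the stack
    if len(stack) != 1 or not isinstance(stack[-1], bool):
        raise ValueError("expression must reduce to a single boolean")
    return stack[0]

# 1 + 2 < 4  →  opcodes: 1, 2, +, 4, <
print(evaluate([1, 2, "+", 4, "<"]))  # True
```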
Datalog fact generation
Datalog fact generation works by repeatedly extending a Datalog world until no new facts are generated.
A Datalog world is:
- a set of rules, each one tagged by the block id they were defined in
- a set of facts, each one tagged by its origin: the block ids that allowed them to exist
Then, for each rule
- facts are filtered based on their origin, and the scope annotation of the rule
- available facts are matched on the rule predicates; only fact combinations that match every predicate are kept
- rules expressions are computed for every matched combination; only fact combinations for which every expression returns true succeed
- new facts are generated by the rule head, based on the matched variables
A fact defined in a block `n` has origin `{n}` (a set containing only `n`).

A fact generated by a rule defined in block `rule_block_id`, matching facts `fact_0, …, fact_n`, has origin `Union({rule_block_id}, origin(fact_0), …, origin(fact_n))`.
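The origin computation can be sketched as a set union (a minimal sketch; the concrete block ids used here are illustrative, not identifiers mandated by any implementation):

```python
def derived_fact_origin(rule_block_id, matched_fact_origins):
    """Origin of a fact produced by a rule: the rule's block id,
    plus the origins of every fact matched by the rule body."""
    origin = {rule_block_id}
    for fact_origin in matched_fact_origins:
        origin |= fact_origin
    return origin

# a fact defined directly in block n has origin {n}:
authority_fact_origin = {0}
block1_fact_origin = {1}

# a rule defined in block 1 matching both facts produces a fact
# whose origin is the union of all of them:
print(derived_fact_origin(1, [authority_fact_origin, block1_fact_origin]))
# {0, 1}
```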
Authorizer
The authorizer provides information on the operation, such as the type of access ("read", "write", etc), the resource accessed, and more ambient data like the current time, source IP address, revocation lists. The authorizer can also provide its own checks. It provides allow and deny policies for the final decision on request validation.
Deserializing the token
The token must first be deserialized according to the protobuf format definition of `Biscuit`.

The cryptographic signature must be checked immediately after deserializing. The authorizer must check that the public key of the authority block is the root public key it is expecting.

A `Biscuit` contains, in its `authority` and `blocks` fields, byte arrays that must be deserialized as a `Block`.
Authorization process
The authorizer will first create a default symbol table, and will append to that table the values from the `symbols` field of each block, starting from the `authority` block and then all the following blocks, ordered by their index.

The authorizer will create a Datalog "world", and add to this world:
- its own facts and rules: ambient data from the request, lists of users and roles, etc.
- the facts from the authority block
- the rules from the authority block
- for each following block:
  - the facts from the block
  - the rules from the block
Revocation identifiers
The authorizer will generate a list of facts indicating revocation identifiers for the token. The revocation identifier for a block is its signature (as it uniquely identifies the block) serialized to a byte array (as in the Protobuf schema). For each of these identifiers, a fact `revocation_id(<index of the block>, <byte array>)` will be generated.
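This generation step can be sketched as follows (a minimal sketch; the `hex:` rendering follows the Datalog text format for byte arrays, but the actual fact carries the raw bytes):

```python
def revocation_id_facts(block_signatures):
    """Build one revocation_id fact per block, pairing the block
    index with the block's signature bytes (shown here in the
    Datalog text format's hex notation)."""
    return [
        f'revocation_id({index}, hex:{signature.hex()})'
        for index, signature in enumerate(block_signatures)
    ]

sigs = [b"\x01\xa2", b"\xff\x00"]
print(revocation_id_facts(sigs))
# ['revocation_id(0, hex:01a2)', 'revocation_id(1, hex:ff00)']
```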
Authorizing
From there, the authorizer can start loading data from each block.
- load facts and rules from every block, tagging each fact and rule with the corresponding block id
- run the Datalog engine on all the facts and rules
- for each check, validate it. If it fails, add an error to the error list
- for each allow/deny policy:
  - run the query. If it succeeds:
    - if it is an allow policy, the verification succeeds; store the result and stop here
    - if it is a deny policy, the verification fails; store the result and stop here
Returning the result:
- if the error list is not empty, return the error list
- check policy result:
- if an allow policy matched, the verification succeeds
- if a deny policy matched, the verification fails
- if no policy matched, the verification fails
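The decision procedure above can be sketched as follows (a minimal sketch where check and policy query results are modeled as pre-computed booleans, rather than actual Datalog evaluation):

```python
def authorize(checks, policies):
    """checks: list of (name, succeeded) pairs; policies: list of
    ("allow"|"deny", matched) pairs in authorizer order.

    Returns ("success"|"failure", error_list) following the spec:
    a non-empty error list always fails; otherwise the first
    matching policy decides; no matching policy fails."""
    errors = [name for name, ok in checks if not ok]
    matched = None
    for kind, ok in policies:
        if ok:
            matched = kind  # first matching policy wins
            break
    if errors:
        return ("failure", errors)
    if matched == "allow":
        return ("success", [])
    # a matching deny policy, or no matching policy at all, fails
    return ("failure", [])

# all checks pass and the first matching policy is an allow:
print(authorize([("check1", True)], [("deny", False), ("allow", True)]))
# ('success', [])
```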
Queries
The authorizer can also run queries over the loaded data. A query is a Datalog rule, and the query's result is the facts it produces.
TODO: describe error codes
Appending
Deserializing
TODO: same as the authorizer, but we do not need to know the root key
Format
The current version of the format is in schema.proto
The token contains two levels of serialization. The main structure that will be transmitted over the wire is either the normal Biscuit wrapper:
message Biscuit {
optional uint32 rootKeyId = 1;
required SignedBlock authority = 2;
repeated SignedBlock blocks = 3;
required Proof proof = 4;
}
message SignedBlock {
required bytes block = 1;
required PublicKey nextKey = 2;
required bytes signature = 3;
optional ExternalSignature externalSignature = 4;
}
message ExternalSignature {
required bytes signature = 1;
required PublicKey publicKey = 2;
}
message PublicKey {
required Algorithm algorithm = 1;
enum Algorithm {
Ed25519 = 0;
}
required bytes key = 2;
}
message Proof {
oneof Content {
bytes nextSecret = 1;
bytes finalSignature = 2;
}
}
The `rootKeyId` is a hint to decide which root public key should be used for signature verification.

Each block contains a serialized byte array of the Datalog data (`block`), the next public key (`nextKey`) and the signature of that block and key by the previous key.

The `proof` field contains either the private key corresponding to the public key in the last block (attenuable tokens) or a signature of the last block by the private key (sealed tokens).

The `block` field is a byte array, containing a `Block` structure serialized in Protobuf format as well:
message Block {
repeated string symbols = 1;
optional string context = 2;
optional uint32 version = 3;
repeated FactV2 facts_v2 = 4;
repeated RuleV2 rules_v2 = 5;
repeated CheckV2 checks_v2 = 6;
repeated Scope scope = 7;
repeated PublicKey publicKeys = 8;
}
Each block contains a `version` field, indicating the format version at which it was generated. Since a Biscuit implementation at version N can receive a valid token generated at version N-1, new implementations must be able to recognize older formats. Moreover, when appending a new block, they cannot convert the old blocks to the new format (since that would invalidate the signature). So each block must carry its own version.

- An implementation must refuse a token containing blocks with a newer format than the range it knows.
- An implementation must refuse a token containing blocks with an older format than the range it knows.
- An implementation may generate blocks with older formats to help with backwards compatibility, when possible, especially for biscuit versions that are only additive in terms of features.
- The lowest supported biscuit version is 3.
- The highest supported biscuit version is 4.
Version 2
This is the format for the 2.0 version of Biscuit. It transports expressions as an array of opcodes.
Text format
When transmitted as text, a Biscuit token should be serialized to a URL-safe base64 string. When the context does not indicate that it is a Biscuit token, that base64 string should be prefixed with `biscuit:`.
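For example, using Python's standard library (a sketch; `token_bytes` stands in for an already-serialized token, and padding handling may differ between implementations):

```python
import base64

def to_text(token_bytes, with_prefix=False):
    """Serialize a token to a URL-safe base64 string, optionally
    adding the "biscuit:" prefix for ambiguous contexts."""
    text = base64.urlsafe_b64encode(token_bytes).decode("ascii")
    return "biscuit:" + text if with_prefix else text

print(to_text(b"\x00\x01\x02", with_prefix=True))  # biscuit:AAEC
```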
Cryptography
Biscuit tokens are based on public key cryptography, with a chain of Ed25519 signatures. Each block contains the serialized Datalog, the next public key, and the signature by the previous key. The token also contains the private key corresponding to the last public key, to sign a new block and attenuate the token, or a signature of the last block by the last private key, to seal the token.
Signature (one block)
- `(pk_0, sk_0)`: the root public and private Ed25519 keys
- `data_0`: the serialized Datalog
- `(pk_1, sk_1)`: the next key pair, generated at random
- `alg_1`: the little-endian representation of the signature algorithm for `pk_1, sk_1` (see protobuf schema)
- `sig_0 = sign(sk_0, data_0 + alg_1 + pk_1)`
The token will contain:
Token {
root_key_id: <optional number indicating the root key to use for verification>
authority: Block {
data_0,
pk_1,
sig_0,
}
blocks: [],
proof: Proof {
nextSecret: sk_1,
},
}
Signature (appending)
With a token containing blocks 0 to n:

Block n contains:
- `data_n`
- `pk_n+1`
- `sig_n`

The token also contains `sk_n+1`.

The new block can optionally be signed by an external key pair `(epk, esk)` and carry an external signature `esig`.

We generate `(pk_n+2, sk_n+2)` at random, and the signature `sig_n+1 = sign(sk_n+1, data_n+1 + esig? + alg_n+2 + pk_n+2)`. If the block is not signed by an external key pair, then `esig` is not part of the signed payload.
The token will contain:
Token {
root_key_id: <optional number indicating the root key to use for verification>
authority: Block_0,
blocks: [Block_1, .., Block_n,
Block_n+1 {
data_n+1,
pk_n+2,
sig_n+1,
epk?, esig?
}]
proof: Proof {
nextSecret: sk_n+2,
},
}
Optional external signature
Blocks generated by a trusted third party can carry an extra signature to provide a proof of their origin. Same as regular signatures, they rely on Ed25519.
The external signature for block n+1, with key pair `(external_pk, external_sk)`, is `external_sig_n+1 = sign(external_sk, data_n+1 + alg_n+1 + pk_n+1)`.

It's quite similar to the regular signature, with a crucial difference: the public key appended to the block payload is the one carried by block n (which is used to verify block n+1).
This means that the authority block can't carry an external signature (that would be useless, since
the root key is not ephemeral and can be trusted directly).
This is necessary to make sure an external signature can't be used for any other token.
The presence of an external signature affects the regular signature: the external signature is part of the payload signed by the regular signature.
The token will contain:
Token {
root_key_id: <optional number indicating the root key to use for verification>
authority: Block_0,
blocks: [Block_1, .., Block_n,
Block_n+1 {
data_n+1,
pk_n+2,
sig_n+1,
external_pk,
external_sig_n+1
}]
proof: Proof {
nextSecret: sk_n+2,
},
}
Verifying
For each block i from 0 to n:
- `verify(pk_i, sig_i, data_i + alg_i+1 + pk_i+1)`

If all signatures are verified, extract `pk_n+1` from the last block and `sk_n+1` from the `proof` field, and check that they are from the same key pair.
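The chained verification can be illustrated with a toy model. This is NOT real cryptography: `sign`/`verify` here are a symmetric hash-based stand-in for Ed25519 (a single random value plays both key roles), and the 4-byte little-endian algorithm encoding is an assumption. It only demonstrates the chaining structure, where each block's signature covers the next key:

```python
import hashlib
import os

def sign(key, payload):
    """Toy symmetric stand-in for Ed25519 signing."""
    return hashlib.sha256(key + payload).digest()

def verify(key, sig, payload):
    """Toy verification: recompute and compare."""
    return sign(key, payload) == sig

ALG_ED25519 = (0).to_bytes(4, "little")  # assumed encoding

def build_chain(root_key, datas):
    """Build the signature chain: each block signs its data, the
    algorithm id and the next key with the previous key."""
    blocks, key = [], root_key
    for data in datas:
        next_key = os.urandom(32)  # toy: one value for both key roles
        sig = sign(key, data + ALG_ED25519 + next_key)
        blocks.append((data, next_key, sig))
        key = next_key
    return blocks, key  # final key is the proof (nextSecret)

def verify_chain(root_key, blocks):
    key = root_key
    for data, next_key, sig in blocks:
        if not verify(key, sig, data + ALG_ED25519 + next_key):
            return False
        key = next_key  # the verified next key checks the next block
    return True

root = os.urandom(32)
blocks, proof = build_chain(root, [b"authority", b"block1"])
print(verify_chain(root, blocks))  # True
```

Tampering with any block's data breaks verification of that block, and since each block's key is vouched for by the previous signature, blocks cannot be removed or reordered either.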
Verifying external signatures
For each block i from 1 to n, where an external signature is present:
- verify(external_pk_i, external_sig_i, data_i + alg_i + pk_i)
Signature (sealing)
With a token containing blocks 0 to n:

Block n contains:
- `data_n`
- `pk_n+1`
- `sig_n`

The token also contains `sk_n+1`.

We generate the signature `sig_n+1 = sign(sk_n+1, data_n + alg_n+1 + pk_n+1 + sig_n)` (we sign the last block and its signature with the last private key).
The token will contain:
Token {
root_key_id: <optional number indicating the root key to use for verification>
authority: Block_0,
blocks: [Block_1, .., Block_n]
proof: Proof {
finalSignature: sig_n+1
},
}
Verifying (sealed)
For each block i from 0 to n:
- `verify(pk_i, sig_i, data_i + alg_i+1 + pk_i+1)`

If all signatures are verified, extract `pk_n+1` from the last block and `sig_n+1` from the `proof` field, and check `verify(pk_n+1, sig_n+1, data_n + alg_n+1 + pk_n+1 + sig_n)`.
Blocks
A block is defined as follows in the schema file:
message Block {
repeated string symbols = 1;
optional string context = 2;
optional uint32 version = 3;
repeated FactV2 facts_v2 = 4;
repeated RuleV2 rules_v2 = 5;
repeated CheckV2 checks_v2 = 6;
repeated Scope scope = 7;
repeated PublicKey publicKeys = 8;
}
The block index is incremented for each new block. Block 0 is the authority block.
Each block can provide facts either from its facts list, or generate them with its rules list.
Symbol table
To reduce the token size and improve performance, Biscuit uses a symbol table, a list of strings that any fact or token can refer to by index. The logic engine does not need to know the content of that list to run, but pretty-printing facts, rules and results will use it.
The symbol table is created from a default table containing, in order:
- read
- write
- resource
- operation
- right
- time
- role
- owner
- tenant
- namespace
- user
- team
- service
- admin
- group
- member
- ip_address
- client
- client_ip
- domain
- path
- version
- cluster
- node
- hostname
- nonce
- query
Symbol table indexes from 0 to 1023 are reserved for the default symbols. Symbols defined in a token or authorizer must start from 1024.
Adding content to the symbol table
Regular blocks (no external signature)
When creating a new block, we start from the current symbol table of the token. For each fact or rule that introduces a new symbol, we add the corresponding string to the table, and convert the fact or rule to use its index instead.
Once every fact and rule has been integrated, we set as the block's symbol table
(its symbols
field) the symbols that were appended to the token's table.
The new token's symbol table is the list from the default table, and for each block in order, the block's symbols.
It is important to verify that different blocks do not contain the same symbol in their list.
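Symbol interning for a new block can be sketched like this (a minimal sketch with a truncated default table; `OFFSET` reflects the rule that symbols defined in a token start at index 1024):

```python
# truncated excerpt of the default symbol table (indexes 0..)
DEFAULT_SYMBOLS = ["read", "write", "resource", "operation", "right",
                   "time", "role", "owner"]
OFFSET = 1024  # token-defined symbols start at index 1024

def intern(token_symbols, new_strings):
    """Return the indexes for new_strings, along with the strings
    that must be appended to the new block's symbols field (only
    strings not already known by the token)."""
    block_symbols, indexes = [], []
    for s in new_strings:
        if s in DEFAULT_SYMBOLS:
            indexes.append(DEFAULT_SYMBOLS.index(s))
        elif s in token_symbols:
            indexes.append(OFFSET + token_symbols.index(s))
        else:
            token_symbols = token_symbols + [s]  # append new symbol
            block_symbols.append(s)
            indexes.append(OFFSET + len(token_symbols) - 1)
    return indexes, block_symbols

indexes, appended = intern(["file1"], ["right", "file1", "file2"])
print(indexes, appended)  # [4, 1024, 1025] ['file2']
```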
3rd party blocks (with an external signature)
Blocks that are signed by an external key don't use the token symbol table
and start from the default symbol table. Following blocks ignore the symbols
declared in their symbols
field.
The reason for this is that the party signing the block is not supposed to have access to the token itself and can't use the token's symbol table.
Public key tables
Public keys carried in SignedBlock
are stored as is, as they are required for verification.
Public keys carried in datalog scope annotations are stored in a table, to reduce token size.
Public keys are interned the same way for first-party and third-party tokens, unlike symbols.
Reading
Building a public key table for a token can be done this way:

For each block:
- add the external public key if defined (and if not already present)
- add the contents of the `publicKeys` field of the `Block` message

It is important to only add an external public key if it's not already present, to avoid having it twice in the table.
Appending
Same as for symbols, the `publicKeys` field should only contain public keys that were not yet present in the table.
Appending a third-party block
Third party blocks are special blocks, that are meant to be signed by a trusted party, to either expand a token or fulfill special checks with dedicated public key constraints.
Unlike first-party blocks, the party signing the token should not have access to the token itself. The third party needs however some context in order to be able to properly serialize and sign block contents. Additionally, the third party needs to return both the serialized block and the external signature.
To support this use-case, the protobuf schema defines two message types: ThirdPartyBlockRequest
and ThirdPartyBlockContents
:
message ThirdPartyBlockRequest {
required PublicKey previousKey = 1;
repeated PublicKey publicKeys = 2;
}
message ThirdPartyBlockContents {
required bytes payload = 1;
required ExternalSignature externalSignature = 2;
}
`ThirdPartyBlockRequest` contains the necessary context for serializing and signing a datalog block:
- `previousKey` is needed for the signature (it makes sure that a third-party block can only be used for a specific biscuit token)
- `publicKeys` is the list of public keys already present in the token table; they are used for serialization

`ThirdPartyBlockContents` contains both the serialized `Block` and the external signature.
The expected sequence is:
- the token holder generates a `ThirdPartyBlockRequest` from their token;
- they send it, along with domain-specific information, to the third party that's responsible for providing a third-party block;
- the third party creates a datalog block (based on the domain-specific information), serializes it, signs it, and returns a `ThirdPartyBlockContents` to the token holder;
- the token holder uses the `ThirdPartyBlockContents` to append a new signed block to the token.

An implementation must be able to:
- generate a `ThirdPartyBlockRequest` from a token (by extracting its last ephemeral public key and its public key table)
- apply a `ThirdPartyBlockContents` to a token by appending the serialized block like a regular block

Same as for biscuit tokens, `ThirdPartyBlockRequest` and `ThirdPartyBlockContents` values can be transferred in text format by encoding them with base64url.
Test cases
We provide sample tokens and the expected result of their verification at https://github.com/biscuit-auth/biscuit/tree/master/samples
References
- "Trust Management Languages" https://www.cs.purdue.edu/homes/ninghui/papers/cdatalog_padl03.pdf
ProtoBuf https://developers.google.com/protocol-buffers/ 3: "Datalog with Constraints: A Foundation for Trust Management Languages" http://crypto.stanford.edu/~ninghui/papers/cdatalog_padl03.pdf 2: "Macaroons: Cookies with Contextual Caveats for Decentralized Authorization in the Cloud" https://ai.google/research/pubs/pub41892
Cryptography
Biscuit uses public key cryptography to build its tokens: the private key is required to create a token, and must be kept safe. The public key can be distributed, and is needed to verify a token.
Specifically, it uses the Ed25519 algorithm.
A public key signature proves that the signed data has not been modified. So how does Biscuit implement attenuation, where a new valid token can be created from an existing one?
The token uses a scheme inspired by public key infrastructure, like TLS certificates. It is made of a list of blocks, each of them containing data, a signature and a public key. When creating the token, we generate a random key pair. The root private key is used to sign the data and the new public key.
The token then contains one block, and a final proof:
- first block:
- data
- new public key
- signature
- proof:
- new private key
To verify that token, we need to know the root public key. With that key, we check the signature of the first block. Then we take the public key from the first block, and verify that it matches the private key from the proof. Any attempt at tampering with the first block would invalidate the signature. Changing the private key in the proof does not affect signed data, and would be detected during verification anyway.
That first block is called the authority block: it is the only one signed by the root private key, it is trusted by the authorizer side to define the token's basic rights. Any following block added during attenuation could have been created by anyone, so they can only restrict rights, by using checks written in Datalog.
Attenuation
If we have a valid token, to create a new one, we copy all the blocks, get the private key from the proof, generate a new random key pair, sign the data and the new public key using the private key from the previous token.
The token now contains:
- all the blocks from the previous token except the last one
- new block:
- data
- new public key
- signature
- proof:
- new private key
To verify that token, we proceed as previously, using the root public key to check the signature of the first block, then the public key from the first block to check the signature of the second block, up until the last block. And then we verify that the private key from the proof matches the public key from the last block.
If any block was modified, it would be detected by signature verification, as it would not match the data. If any block was removed, it would be detected by signature verification too, as the public key would not match the signature.
Sealed tokens
It is possible to seal a token, making sure that it cannot be attenuated anymore. In that scheme, we create a new token, again by copying the blocks from the existing one, and using the private key from the proof, generate a new proof containing a signature of the last data block (including the signature). This proves that we had access to the last private key.
Datalog
Facts
In Datalog, data is represented by facts. They come in the format `fact_name(42, "string")`. The fact has a name that indicates the "type", and contains a tuple of data between parentheses. Facts can be seen as rows in a relational database.

All of the work in Datalog consists in selecting data from facts and generating new ones.
Namespacing
Fact names can contain colons (`:`). While they don't mean anything particular to the datalog engine, they are meant as a namespace separator: when your rules start to grow, or if you want to provide reusable rules that don't clash with others, you can namespace your datalog facts and rules:
service_a:fact_name(42);
Data types
A fact contains data of the following types:
- integer: 64-bit signed integers: `12`
- string: UTF-8 strings: `"string"`
- byte array: represented as hexadecimal in the text format: `hex:01A2`
- date: in RFC 3339 format: `1985-04-12T23:20:50.52Z`
- boolean: `true` or `false`
- set: a deduplicated list of values of any type (except set): `[ "a", "b", "c"]`
Rules
Rules are used to generate new facts from existing ones. They specify a pattern to select facts and extract data from them.
When we execute the rule `right($resource, "write") <- user($user_id), owner($user_id, $resource)`, we will look at all the `user` facts, and for each one, look at the `owner` facts with a matching `$user_id` value, select the second element from the fact with the `$resource` variable, and create a new fact from it.
right($resource, "write") <- user($user_id), owner($user_id, $resource);
user(1);
owner(1, "file1.txt");
owner(1, "file2.txt");
owner(2, "file3.txt");
allow if true;
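The join performed by this rule can be sketched in Python (a naive evaluation sketch, not how a real Datalog engine indexes and matches facts):

```python
def apply_rule(users, owners):
    """Derive right(resource, "write") facts from user and owner
    facts, joining on the user id as the rule body does."""
    return [
        (resource, "write")
        for user_id in users
        for owner_id, resource in owners
        if owner_id == user_id  # owner($user_id, $resource) must match
    ]

users = [1]
owners = [(1, "file1.txt"), (1, "file2.txt"), (2, "file3.txt")]
print(apply_rule(users, owners))
# [('file1.txt', 'write'), ('file2.txt', 'write')]
```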
A rule contains data of the following types:
- variable: `$variable`
- integer: 64-bit signed integers: `12`
- string: UTF-8 strings: `"string"`
- byte array: represented as hexadecimal in the text format: `hex:01A2`
- date: in RFC 3339 format: `1985-04-12T23:20:50.52Z`
- boolean: `true` or `false`
- set: a deduplicated list of values of any type (except set or variable): `[ "a", "b", "c"]`
Expressions
Rules filter data by matching between facts, but also by putting constraints on the variables. We could add a path prefix constraint to our previous rule like this: right($resource, "write") <- user($user_id), owner($user_id, $resource), $resource.starts_with("/folder1/")
Expressions return a boolean. If all the expressions in a rule return true for a selection of facts, it will produce a new fact.
Expressions can use the following operations:
Unary operations
Here are the currently defined unary operations:
- parens: returns its argument without modification: `1 + ( 2 + 3 )`
- negate: boolean negation: `!( 1 < 2 )`
- length: defined on strings, byte arrays and sets, returns an integer: `"hello".length()`
Binary operations
Here are the currently defined binary operations:
- less than, defined on integers and dates, returns a boolean: `<`
- greater than, defined on integers and dates, returns a boolean: `>`
- less or equal, defined on integers and dates, returns a boolean: `<=`
- greater or equal, defined on integers and dates, returns a boolean: `>=`
- equal, defined on integers, strings, byte arrays, dates, sets, returns a boolean: `==`
- contains, takes either:
  - a set and another value, returns a boolean; between two sets, indicates if the first set is a superset of the second one: `$set.contains(1)`
  - two strings, returns a boolean indicating if the second string is a substring of the first: `"a long string".contains("long")`
- prefix, defined on strings, returns a boolean: `$str.starts_with("hello")`
- suffix, defined on strings, returns a boolean: `$str.ends_with("world")`
- regex, defined on strings, returns a boolean: `$str.matches("ab?c")`
- add, defined:
  - on integers, returns an integer: `+`
  - on strings, concatenates two strings: `"a long" + " string"`
- sub, defined on integers, returns an integer: `-`
- mul, defined on integers, returns an integer: `*`
- div, defined on integers, returns an integer: `/`
- and, defined on booleans, returns a boolean: `&&`
- or, defined on booleans, returns a boolean: `||`
- intersection, defined on sets, returns the intersection of both arguments: `$set.intersection([1, 2])`
- union, defined on sets, returns the union of both arguments: `$set.union([1, 2])`
Checks and allow/deny policies
Datalog authorization is enforced by checks and allow/deny policies. All the checks will be evaluated, and if one of them does not validate, the request will be rejected. Policies are evaluated one by one, in the order specified by the authorizer, stopping at the first that triggers. If it was a deny policy, the request will be rejected. If it was an allow policy, and all checks passed, the request will be accepted. If no policy matched, the request is rejected.
They have a format similar to rules:
user("admin");
right("file1.txt", "read");
// check
check if right("file1.txt", "read");
// allow policy
allow if user("admin");
// deny policy
deny if true;
Block scoping
Offline attenuation means that the token holder can freely add extra blocks to a token. The datalog engine is designed to ensure that adding a block can only restrict what a token can do, and never extend it.
The main purpose of an attenuation block is to add checks that depend on facts defined by the authorizer.
To achieve that, facts are scoped: each fact is associated with its origin, the block that defined it; for facts generated by rules, the origin is the block of the rule, along with the blocks of all the facts matched by the rule body.
By default (i.e. when not using trusting annotations), a rule, check or policy only trusts (considers) facts defined:
- in the authority block;
- in the authorizer;
- in the same block (for rules defined in attenuation blocks).
This model guarantees that adding a block can only restrict what a token can do: by default, the only effect of adding a block to a token is to add new checks.
// the token emitter grants read access to file1
right("file1", "read");
// the authority block trusts facts from itself and the authorizer
check if action("read");
right("file2", "read");
// blocks trust facts from the authority block and the authorizer
check if action("read");
// blocks trust their own facts
check if right("file2", "read");
resource("file1");
action("read");
// the authorizer does not trust facts from additional blocks
check if right("file2", "read");
// the authorizer trusts facts from the authority block
check if right("file1", "read");
allow if true;
Scope annotations and third-party blocks
A rule body (the right-hand side of a <-) can specify a scope annotation, to change the default scoping behaviour. By default, only facts from the current block, the authorizer and the authority block are considered. Not adding a scope annotation is equivalent to adding trusting authority (the authorizer and current block are always trusted, even with a scope annotation).
Scope annotations are useful when working with third-party blocks: given a third-party block signed by a specific keypair, it is possible to use trusting {public_key}
to trust facts coming from this block.
// the token emitter grants read access to file1
right("file1", "read");
// the authority block trusts facts from itself and the authorizer
check if action("read");
right("file2", "read");
// blocks trust facts from the authority block and the authorizer
check if action("read");
// blocks trust their own facts
check if right("file2", "read");
resource("file1");
action("read");
// by default the authorizer trusts facts from the authority block
check if right("file1", "read");
check if right("file1", "read") trusting authority; // same as without the annotation
// the authorizer trusts facts from blocks signed by specific keys, when asked
check if right("file2", "read") trusting ed25519/b2d798062e2ac0d383ed8f75980959bcc0cc2fec8ebe0c77fbe8697dcc552946;
// the authorizer doesn't trust facts from the authority block, when not asked:
// there is a scope annotation, but it does not mention authority
check if right("file1", "read") trusting ed25519/b2d798062e2ac0d383ed8f75980959bcc0cc2fec8ebe0c77fbe8697dcc552946;
// the authorizer does not trust facts from additional blocks by default
check if right("file2", "read");
allow if true;
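Scope annotations apply to rules as well as checks: a rule can restrict which blocks it derives facts from. As a sketch, reusing the public key from the example above (the allowed_read predicate name is hypothetical):

```
// derive a fact only from right() facts found in blocks
// signed by the given third-party key
allowed_read($file) <- right($file, "read") trusting ed25519/b2d798062e2ac0d383ed8f75980959bcc0cc2fec8ebe0c77fbe8697dcc552946;
check if allowed_read("file2");
```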